- Expert Verified, Online, Free.
Question #251 Topic 1
An Amazon EC2 instance is located in a private subnet in a new VPC. This subnet does not have outbound internet access, but the EC2 instance needs the ability to download monthly security updates from an outside vendor.
What should a solutions architect do to meet these requirements?
A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table to use the internet gateway as the default route.
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the NAT instance as the default route.
D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use the internet gateway as the default route.
Community vote distribution
B (100%)
mhmt4438 Highly Voted 5 months, 2 weeks ago
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
This approach will allow the EC2 instance to access the internet and download the monthly security updates while still being located in a private subnet. By creating a NAT gateway and placing it in a public subnet, it will allow the instances in the private subnet to access the internet through the NAT gateway. And then, configure the private subnet route table to use the NAT gateway as the default route. This will ensure that all outbound traffic is directed through the NAT gateway, allowing the EC2 instance to access the internet while still maintaining the security of the private subnet.
upvoted 5 times
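The routing described above can be sketched as a toy longest-prefix route lookup. This is only an illustration of how the private subnet's route table sends non-VPC traffic to the NAT gateway; the CIDR block and the NAT gateway ID are invented for the example.

```python
import ipaddress

# Toy model of the private subnet's route table: (CIDR, target) pairs.
# As in a real VPC route table, the most specific matching prefix wins.
# "local" covers intra-VPC traffic; 0.0.0.0/0 is the default route.
ROUTE_TABLE = [
    ("10.0.0.0/16", "local"),          # intra-VPC traffic stays local
    ("0.0.0.0/0", "nat-0123456789"),   # default route -> NAT gateway
]

def resolve_route(dest_ip: str) -> str:
    """Return the target of the longest-prefix route matching dest_ip."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in ROUTE_TABLE:
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1]
```

With this table, a vendor update server outside the VPC resolves to the NAT gateway, while traffic to another instance in the VPC stays local.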
Manjunathkb 2 months, 2 weeks ago
A NAT gateway does not allow internet access on its own. It needs an internet gateway too. None of the answers make sense.
upvoted 2 times
Manjunathkb 2 months, 2 weeks ago
refer below link
https://aws.amazon.com/about-aws/whats-new/2021/06/aws-removes-nat-gateways-dependence-on-internet-gateway-for-private-communications/
upvoted 1 times
Bmarodi Most Recent 4 weeks ago
Option B meets the requirements, hence B is the right choice.
upvoted 1 times
Manjunathkb 2 months, 2 weeks ago
D would have been the answer if the NAT gateway were placed in a public subnet and not in the subnet where the EC2 instance is located. None of the answers are correct.
upvoted 1 times
AlessandraSAA 3 months, 3 weeks ago
why not C?
upvoted 1 times
UnluckyDucky 3 months, 2 weeks ago
Because NAT Gateways are preferred over NAT Instances by AWS and in general.
I have yet to find a situation where a NAT Instance would be more applicable than a NAT Gateway, which is fully managed and overall an easier solution to implement - both in AWS questions and in the real world.
upvoted 1 times
techhb 5 months, 1 week ago
NAT Gateway is right choice
upvoted 1 times
Question #252 Topic 1
A solutions architect needs to design a system to store client case files. The files are core company assets and are important. The number of files will grow over time.
The files must be simultaneously accessible from multiple application servers that run on Amazon EC2 instances. The solution must have built-in redundancy.
Which solution meets these requirements?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon S3 Glacier Deep Archive
D. AWS Backup
Community vote distribution
A (100%)
Bmarodi 4 weeks ago
Option A meets the requirements, hence A is the correct answer.
upvoted 1 times
moiraqi 1 month ago
What does "The solution must have built-in redundancy" mean?
upvoted 1 times
KZM 4 months ago
If the application servers are running on Linux or UNIX operating systems, EFS is the most suitable solution for the given requirements.
upvoted 1 times
TungPham 4 months, 2 weeks ago
"accessible from multiple application servers that run on Amazon EC2 instances"
upvoted 3 times
Aninina 5 months, 2 weeks ago
EFS Amazon Elastic File System (EFS) automatically grows and shrinks as you add and remove files with no need for management or provisioning.
upvoted 4 times
Question #253 Topic 1
A solutions architect has created two IAM policies: Policy1 and Policy2. Both policies are attached to an IAM group.
A cloud engineer is added as an IAM user to the IAM group. Which action will the cloud engineer be able to perform?
A. Deleting IAM users
B. Deleting directories
C. Deleting Amazon EC2 instances
D. Deleting logs from Amazon CloudWatch Logs
Community vote distribution
C (100%)
JayBee65 Highly Voted 5 months ago
ec2:* allows full control of EC2 instances, so C is correct.
The policy only grants get and list permissions on IAM users, so not A.
The explicit deny on ds:Delete* blocks delete-directory, so not B; see https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ds/index.html
The policy only grants get and describe permissions on logs, so not D.
upvoted 8 times
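The evaluation logic in the comment above (explicit deny overrides allow, implicit deny by default) can be sketched in a few lines. The policy statements below are hypothetical, reconstructed from the discussion, not the actual Policy1/Policy2 documents from the question.

```python
import fnmatch

# Hypothetical flattened statements modeled on the discussion:
# ec2:* allowed, read-only on iam and logs, explicit deny on ds:Delete*.
POLICIES = [
    {"Effect": "Allow", "Action": "ec2:*"},
    {"Effect": "Allow", "Action": "iam:Get*"},
    {"Effect": "Allow", "Action": "iam:List*"},
    {"Effect": "Allow", "Action": "logs:Get*"},
    {"Effect": "Allow", "Action": "logs:Describe*"},
    {"Effect": "Deny", "Action": "ds:Delete*"},
]

def is_allowed(action: str) -> bool:
    """IAM-style evaluation: explicit Deny wins, then Allow, else deny."""
    matched = [p["Effect"] for p in POLICIES
               if fnmatch.fnmatchcase(action, p["Action"])]
    if "Deny" in matched:       # explicit deny overrides everything
        return False
    return "Allow" in matched   # nothing matched -> implicit deny
```

Under these statements, only the EC2 delete action succeeds, matching answer C.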
Aninina Most Recent 5 months, 1 week ago
C : Deleting Amazon EC2 instances
upvoted 1 times
mhmt4438 5 months, 2 weeks ago
Answer is C
upvoted 2 times
Aninina 5 months, 2 weeks ago
C : Deleting Amazon EC2 instances
upvoted 1 times
Morinator 5 months, 2 weeks ago
Explicit deny on directories; the only available delete action is on EC2.
upvoted 2 times
Question #254 Topic 1
A company is reviewing a recent migration of a three-tier application to a VPC. The security team discovers that the principle of least privilege is not being applied to Amazon EC2 security group ingress and egress rules between the application tiers.
What should a solutions architect do to correct this issue?
A. Create security group rules using the instance ID as the source or destination.
B. Create security group rules using the security group ID as the source or destination.
C. Create security group rules using the VPC CIDR blocks as the source or destination.
D. Create security group rules using the subnet CIDR blocks as the source or destination.
Community vote distribution
B (100%)
Aninina Highly Voted 5 months, 2 weeks ago
B. Create security group rules using the security group ID as the source or destination.
This way, the security team can ensure that the least privileged access is given to the application tiers by allowing only the necessary communication between the security groups. For example, the web tier security group should only allow incoming traffic from the load balancer security group and outgoing traffic to the application tier security group. This approach provides a more granular and secure way to control traffic between the different tiers of the application and also allows for easy modification of access if needed.
It's also worth noting that it's good practice to minimize the number of open ports and protocols, and use security groups as a first line of defense, in addition to network access control lists (ACLs) to control traffic between subnets.
upvoted 5 times
Wael216 Highly Voted 3 months, 4 weeks ago
By using security group IDs, the ingress and egress rules can be restricted to only allow traffic from the necessary source or destination, and to deny all other traffic. This ensures that only the minimum required traffic is allowed between the application tiers.
Option A is not the best choice because using the instance ID as the source or destination would allow traffic from any instance with that ID, which may not be limited to the specific application tier.
Option C is also not the best choice because using VPC CIDR blocks would allow traffic from any IP address within the VPC, which may not be limited to the specific application tier.
Option D is not the best choice because using subnet CIDR blocks would allow traffic from any IP address within the subnet, which may not be limited to the specific application tier.
upvoted 5 times
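The distinction the comments above draw (security-group-ID references match only members of the referenced group, while CIDR rules match every address in a range) can be illustrated with a toy membership check. Group IDs and instance memberships are invented for the example.

```python
# Toy model of option B: the app tier's security group accepts port 8080
# only from members of the web tier's security group, not from every
# address in the subnet or VPC CIDR.
INGRESS_RULES = {
    "sg-app": [{"port": 8080, "source_sg": "sg-web"}],
}
MEMBERSHIP = {"i-web1": "sg-web", "i-db1": "sg-db"}  # instance -> SG

def is_ingress_allowed(dest_sg: str, src_instance: str, port: int) -> bool:
    """True only if the source instance belongs to a referenced group."""
    src_sg = MEMBERSHIP.get(src_instance)
    return any(rule["port"] == port and rule["source_sg"] == src_sg
               for rule in INGRESS_RULES.get(dest_sg, []))
```

A web-tier instance reaches the app tier on 8080; a database instance in the same VPC does not, which is the least-privilege behavior the question asks for.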
Bmarodi Most Recent 4 weeks ago
I vote for option B.
upvoted 1 times
LuckyAro 5 months, 1 week ago
B. Create security group rules using the security group ID as the source or destination.
upvoted 1 times
techhb 5 months, 1 week ago
Security group rules apply to instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html
upvoted 1 times
Morinator 5 months, 2 weeks ago
B is right.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html
upvoted 1 times
Question #255 Topic 1
A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are
experiencing timeouts during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same desired transaction.
How should a solutions architect refactor this workflow to prevent the creation of multiple orders?
A. Configure the web application to send an order message to Amazon Kinesis Data Firehose. Set the payment service to retrieve the message from Kinesis Data Firehose and process the order.
B. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request. Use Lambda to query the database, call the payment service, and pass in the order information.
C. Store the order in the database. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set the payment service to poll Amazon SNS, retrieve the message, and process the order.
D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.
Community vote distribution
D (100%)
Aninina Highly Voted 5 months, 2 weeks ago
D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.
This approach ensures that the order creation and payment processing steps are separate and atomic. By sending the order information to an SQS FIFO queue, the payment service can process the order one at a time and in the order they were received. If the payment service is unable to process an order, it can be retried later, preventing the creation of multiple orders. The deletion of the message from the queue after it is processed will prevent the same message from being processed multiple times.
It's worth noting that FIFO queues guarantee that messages are processed in the order they are received, and prevent duplicates.
upvoted 6 times
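The deduplication behavior described above can be sketched as a toy FIFO queue with content-based deduplication: within the 5-minute window, a resubmitted message whose body hashes to an already-seen deduplication ID is accepted but not enqueued again, so one checkout yields one order. This is a simplified local model, not the SQS API.

```python
import hashlib

DEDUP_WINDOW = 300  # seconds, matching SQS FIFO's 5-minute window

class FifoQueue:
    """Toy SQS FIFO queue with content-based deduplication."""

    def __init__(self):
        self.messages = []
        self.seen = {}  # dedup_id -> time it was first seen

    def send(self, body: str, now: float) -> bool:
        dedup_id = hashlib.sha256(body.encode()).hexdigest()
        first = self.seen.get(dedup_id)
        if first is not None and now - first < DEDUP_WINDOW:
            return False  # duplicate within the window: not enqueued
        self.seen[dedup_id] = now
        self.messages.append(body)
        return True
```

A resubmitted order within the window is silently dropped; after the window it would be treated as new, which is why the payment service should also delete processed messages.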
antropaws Most Recent 6 days, 12 hours ago
Why not A?
upvoted 1 times
Wael216 3 months, 4 weeks ago
The use of a FIFO queue in Amazon SQS ensures that messages are processed in the order they are received.
upvoted 1 times
Question #256 Topic 1
A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent
accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)
A. Enable a read-only bucket ACL.
B. Enable versioning on the bucket.
C. Attach an IAM policy to the bucket.
D. Enable MFA Delete on the bucket.
E. Encrypt the bucket using AWS KMS.
Community vote distribution
BD (100%)
Bmarodi 4 weeks ago
Options B & D are the correct answers.
upvoted 1 times
MinHyeok 4 months, 2 weeks ago
Ah whatever, I dunno.
upvoted 3 times
Aninina 5 months, 1 week ago
B and D for sure guys
upvoted 2 times
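Answers B and D can be illustrated with a toy versioned bucket: every PUT adds a new version, a simple DELETE only adds a delete marker (so nothing is lost), and permanently removing a version is gated behind an MFA flag. This is a local simulation of the concepts, not the S3 API.

```python
class VersionedBucket:
    """Toy model of S3 versioning (B) plus MFA Delete (D)."""

    def __init__(self):
        self.versions = {}  # key -> list of versions, newest last

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        # A plain delete only appends a delete marker; old versions survive.
        self.versions.setdefault(key, []).append("<delete-marker>")

    def get(self, key):
        latest = self.versions.get(key, [None])[-1]
        return None if latest == "<delete-marker>" else latest

    def delete_version(self, key, index, mfa_ok=False):
        # Permanent version deletion requires MFA, modeled as a flag.
        if not mfa_ok:
            raise PermissionError("MFA Delete required")
        self.versions[key].pop(index)
```

Users can still download, modify, and re-upload documents; an accidental delete is reversible because every prior version remains retrievable.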
Question #257 Topic 1
A company is building a solution that will report Amazon EC2 Auto Scaling events across all the applications in an AWS account. The company needs to use a serverless solution to store the EC2 Auto Scaling status data in Amazon S3. The company then will use the data in Amazon S3 to provide near-real-time updates in a dashboard. The solution must not affect the speed of EC2 instance launches.
How should the company move the data to Amazon S3 to meet these requirements?
A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
B. Launch an Amazon EMR cluster to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto Scaling status data directly to Amazon S3.
D. Use a bootstrap script during the launch of an EC2 instance to install Amazon Kinesis Agent. Configure Kinesis Agent to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
Community vote distribution
A (75%) C (25%)
markw92 1 week, 1 day ago
A: I was thinking D was the answer, but "the solution must not affect the speed of EC2 instance launches" makes the difference; I read the question too fast. A is the right choice.
upvoted 1 times
Rahulbit34 1 month, 2 weeks ago
A because of near real time scenario
upvoted 3 times
UnluckyDucky 3 months, 2 weeks ago
Both A and C are applicable - no doubt there.
C is more straightforward and to the point of the question imho.
upvoted 2 times
UnluckyDucky 3 months, 2 weeks ago
Changing my answer to *A*, as the dashboard must provide near-real-time updates.
Unless the Lambda is configured to run every minute, which is not common with schedules, it is not considered near real-time.
upvoted 3 times
bdp123 4 months, 2 weeks ago
Serverless solution and near real time
upvoted 2 times
Stanislav4907 4 months, 2 weeks ago
near real time -eliminates c
upvoted 1 times
devonwho 4 months, 4 weeks ago
You can use metric streams to continually stream CloudWatch metrics to a destination of your choice, with near-real-time delivery and low latency. One of the use cases is Data Lake: create a metric stream and direct it to an Amazon Kinesis Data Firehose delivery stream that delivers your CloudWatch metrics to a data lake such as Amazon S3.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html
upvoted 2 times
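To make option A concrete, here is the approximate shape of a metric-stream record as Firehose would deliver it to S3 in JSON output format: newline-delimited JSON objects, one per metric/period. The field names follow the documented JSON format, but the stream name, dimension values, and numbers below are invented for illustration.

```python
import json

def make_stream_record(namespace, metric, value, ts_ms):
    """Build one illustrative CloudWatch metric-stream JSON record."""
    return {
        "metric_stream_name": "asg-status-stream",  # hypothetical name
        "namespace": namespace,
        "metric_name": metric,
        "dimensions": {"AutoScalingGroupName": "web-asg"},
        "timestamp": ts_ms,
        "value": {"min": value, "max": value, "sum": value, "count": 1},
        "unit": "None",
    }

# Firehose batches records into S3 objects as newline-delimited JSON.
batch = "\n".join(
    json.dumps(make_stream_record("AWS/AutoScaling",
                                  "GroupInServiceInstances",
                                  4, 1680000000000))
    for _ in range(2))
```

A dashboard reading these S3 objects gets updates within minutes of the metric being emitted, with no code running in the instance launch path.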
Stanislav4907 5 months ago
Option C, using an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule to send the EC2 Auto Scaling status data directly to Amazon S3, may not be the best choice because it may not provide real-time updates to the dashboard.
A schedule-based approach with an EventBridge rule and Lambda function may not be able to deliver the data in near real-time, as the EC2 Auto Scaling status data is generated dynamically and may not always align with the schedule set by the EventBridge rule.
Additionally, using a schedule-based approach with EventBridge and Lambda also has the potential to create latency, as there may be a delay between the time the data is generated and the time it is sent to S3.
In this scenario, using Amazon CloudWatch and Kinesis Data Firehose as described in Option A, provides a more reliable and near real-time solution.
upvoted 1 times
MikelH93 5 months ago
A seems to be the right answer. Don't think C could be correct as it says "near real-time" and C is on schedule
upvoted 1 times
KAUS2 5 months ago
C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto Scaling status data directly to Amazon S3.
upvoted 1 times
techhb 5 months, 1 week ago
A seems the right choice. The serverless keyword is confusing, but a CloudWatch metric stream is serverless too.
upvoted 2 times
Aninina 5 months, 1 week ago
A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
upvoted 2 times
mhmt4438 5 months, 2 weeks ago
C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto Scaling status data directly to Amazon S3.
This approach will use a serverless solution (AWS Lambda) which will not affect the speed of EC2 instance launches. It will use the EventBridge rule to invoke the Lambda function on schedule to send the data to S3. This will meet the requirement of near-real-time updates in a dashboard as well. The Lambda function can be triggered by CloudWatch events that are emitted when Auto Scaling events occur. The function can then collect the necessary data and store it in S3. This direct sending of data to S3 will reduce the number of steps and hence it is more efficient and cost-effective.
upvoted 2 times
Aninina 5 months, 1 week ago
ChatGPT is not correct here
upvoted 3 times
Parsons 5 months, 2 weeks ago
A is the correct answer. "near-real-time" => A & D.
"The solution must not affect the speed of EC2 instance launches." => D is incorrect.
upvoted 2 times
Question #258 Topic 1
A company has an application that places hundreds of .csv files into an Amazon S3 bucket every hour. The files are 1 GB in size. Each time a file is uploaded, the company needs to convert the file to Apache Parquet format and place the output file into an S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to download the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Invoke the Lambda function for each S3 PUT event.
B. Create an Apache Spark job to read the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Create an AWS Lambda function for each S3 PUT event to invoke the Spark job.
C. Create an AWS Glue table and an AWS Glue crawler for the S3 bucket where the application places the .csv files. Schedule an AWS Lambda function to periodically use Amazon Athena to query the AWS Glue table, convert the query results into Parquet format, and place the output files into an S3 bucket.
D. Create an AWS Glue extract, transform, and load (ETL) job to convert the .csv files to Parquet format and place the output files into an S3 bucket. Create an AWS Lambda function for each S3 PUT event to invoke the ETL job.
Community vote distribution
D (82%) A (18%)
Parsons Highly Voted 5 months, 2 weeks ago
No, D should be correct.
"LEAST operational overhead" => use a fully managed service like Glue instead of doing it manually as in answer A.
upvoted 10 times
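As background for the CSV-to-Parquet discussion, the core of the conversion is reshaping row-oriented CSV data into a columnar layout. The toy function below shows only that reshaping with the standard library; the Glue ETL job in option D would additionally apply real Parquet encoding and compression.

```python
import csv
import io

def csv_to_columns(csv_text: str) -> dict:
    """Reshape CSV rows into a column-oriented dict (Parquet-style layout)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    columns = {}
    for row in reader:
        for name, value in row.items():
            columns.setdefault(name, []).append(value)
    return columns
```

Columnar storage is what makes Parquet efficient for analytics: each column can be scanned, encoded, and compressed independently.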
F629 Most Recent 1 week, 2 days ago
Both A and D can work, but A is simpler. It's closer to the "least operational effort".
upvoted 1 times
shanwford 2 months, 3 weeks ago
The maximum size for a Lambda event payload is 256 KB, so (A) won't work with 1 GB files. Glue is what AWS recommends for the Parquet transformation.
upvoted 2 times
jennyka76 4 months, 2 weeks ago
ANS - D
https://aws.amazon.com/blogs/database/how-to-extract-transform-and-load-data-for-analytic-processing-using-aws-glue-part-2/
- READ ARTICLE -
upvoted 2 times
aws4myself 5 months ago
Here A is the correct answer. The reason is the least operational overhead. A => S3 - Lambda - S3
D => S3 - Lambda - Glue - S3
Also, Glue cannot convert on the fly automatically; you need to write some code there. If you write the same code in Lambda, it will do the same conversion and push the file to S3.
Lambda supports memory from 128 MB to 10 GB, so it can handle the files easily.
And we need to consider cost too; Glue costs more. Hope many on this forum realize these differences.
upvoted 4 times
nder 4 months ago
Cost is not a factor. AWS Glue is a fully managed service therefore, it's the least operational overhead
upvoted 2 times
LuckyAro 4 months, 4 weeks ago
We also need to stay with the question, cost was not a consideration in the question.
upvoted 1 times
JayBee65 5 months ago
A is unlikely to work as Lambda may struggle with 1GB size: "< 64 MB, beyond which lambda is likely to hit memory caps", see https://stackoverflow.com/questions/41504095/creating-a-parquet-file-on-aws-lambda-function
upvoted 2 times
jainparag1 5 months, 1 week ago
Should be D, as Glue is a managed service and provides an ETL job for converting .csv files to Parquet off the shelf.
upvoted 1 times
Joxtat 5 months, 1 week ago
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
upvoted 1 times
techhb 5 months, 1 week ago
AWS Glue is the right solution here.
upvoted 1 times
mp165 5 months, 1 week ago
I am thinking D.
A says Lambda will download the .csv... but to where? That seems manual, based on that.
upvoted 1 times
Question #259 Topic 1
A company is implementing new data retention policies for all databases that run on Amazon RDS DB instances. The company must retain daily backups for a minimum period of 2 years. The backups must be consistent and restorable.
Which solution should a solutions architect recommend to meet these requirements?
A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of 2 years after creation. Assign the RDS DB instances to the backup plan.
B. Configure a backup window for the RDS DB instances for daily snapshots. Assign a snapshot retention policy of 2 years to each RDS DB instance. Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule snapshot deletions.
C. Configure database transaction logs to be automatically backed up to Amazon CloudWatch Logs with an expiration period of 2 years.
D. Configure an AWS Database Migration Service (AWS DMS) replication task. Deploy a replication instance, and configure a change data capture (CDC) task to stream database changes to Amazon S3 as the target. Configure S3 Lifecycle policies to delete the snapshots after 2 years.
Community vote distribution
A (100%)
markw92 1 week, 1 day ago
Why not B?
upvoted 1 times
antropaws 1 month ago
Why not D?
Creating tasks for ongoing replication using AWS DMS: You can create an AWS DMS task that captures ongoing changes from the source data store. You can do this capture while you are migrating your data. You can also create a task that captures ongoing changes after you complete your initial (full-load) migration to a supported target data store. This process is called ongoing replication or change data capture (CDC). AWS DMS uses this process when replicating ongoing changes from a source data store.
upvoted 1 times
gold4otas 3 months ago
A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of 2 years after creation. Assign the RDS DB instances to the backup plan.
upvoted 1 times
Aninina 5 months, 1 week ago
A A A A A A
upvoted 2 times
bamishr 5 months, 2 weeks ago
Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of 2 years after creation. Assign the RDS DB instances to the backup plan.
upvoted 4 times
Question #260 Topic 1
A company’s compliance team needs to move its file shares to AWS. The shares run on a Windows Server SMB file share. A self-managed on-premises Active Directory controls access to the files and folders.
The company wants to use Amazon FSx for Windows File Server as part of the solution. The company must ensure that the on-premises Active Directory groups restrict access to the FSx for Windows File Server SMB compliance shares, folders, and files after the move to AWS. The company has created an FSx for Windows File Server file system.
Which solution will meet these requirements?
A. Create an Active Directory Connector to connect to the Active Directory. Map the Active Directory groups to IAM groups to restrict access.
B. Assign a tag with a Restrict tag key and a Compliance tag value. Map the Active Directory groups to IAM groups to restrict access.
C. Create an IAM service-linked role that is linked directly to FSx for Windows File Server to restrict access.
D. Join the file system to the Active Directory to restrict access.
Community vote distribution
D (84%) A (16%)
mhmt4438 Highly Voted 5 months, 2 weeks ago
D. Join the file system to the Active Directory to restrict access.
Joining the FSx for Windows File Server file system to the on-premises Active Directory will allow the company to use the existing Active Directory groups to restrict access to the file shares, folders, and files after the move to AWS. This option allows the company to continue using their existing access controls and management structure, making the transition to AWS more seamless.
upvoted 11 times
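Joining the file system to the self-managed AD is expressed at creation time through the file system's Windows configuration. The sketch below builds that configuration block as plain data, mirroring the shape of FSx's self-managed AD settings; the domain name, DNS IPs, OU, and service account are invented for illustration.

```python
def fsx_windows_configuration() -> dict:
    """Sketch of an FSx for Windows File Server self-managed AD join."""
    return {
        "ThroughputCapacity": 32,  # MB/s, illustrative sizing
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",            # hypothetical
            "DnsIps": ["10.0.0.10", "10.0.1.10"],        # on-prem DNS
            "UserName": "FsxJoinUser",                   # needs join rights
            "Password": "<fetch-from-secrets-manager>",  # placeholder
            "OrganizationalUnitDistinguishedName":
                "OU=FileServers,DC=corp,DC=example,DC=com",
        },
    }
```

Once joined, the existing on-premises AD groups apply directly as NTFS and share permissions on the FSx SMB shares, which is exactly what the compliance requirement asks for.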
kraken21 Most Recent 2 months, 4 weeks ago
The other options refer to IAM-based control, which is not possible here. The existing AD should be used without IAM.
upvoted 1 times
Abhineet9148232 3 months, 1 week ago
https://aws.amazon.com/blogs/storage/using-amazon-fsx-for-windows-file-server-with-an-on-premises-active-directory/
upvoted 2 times
somsundar 3 months, 2 weeks ago
Answer D. Amazon FSx does not support Active Directory Connector.
upvoted 2 times
Abhineet9148232 3 months, 3 weeks ago
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/self-managed-AD.html
upvoted 2 times
Yelizaveta 4 months, 1 week ago
Note:
Amazon FSx does not support Active Directory Connector and Simple Active Directory.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html
upvoted 3 times
aakashkumar1999 4 months, 3 weeks ago
The answer will be AD Connector, so A. It will create a proxy to your on-premises AD which you can use to restrict access.
upvoted 2 times
Stanislav4907 5 months ago
Option D: Join the file system to the Active Directory to restrict access.
Joining the FSx for Windows File Server file system to the on-premises Active Directory allows the company to use the existing Active Directory groups to restrict access to the file shares, folders, and files after the move to AWS. By joining the file system to the Active Directory, the company can maintain the same access control as before the move, ensuring that the compliance team can maintain compliance with the relevant regulations and standards.
Options A and B involve creating an Active Directory Connector or assigning a tag to map the Active Directory groups to IAM groups, but these options do not allow for the use of the existing Active Directory groups to restrict access to the file shares in AWS.
Option C involves creating an IAM service-linked role linked directly to FSx for Windows File Server to restrict access, but this option does not take advantage of the existing on-premises Active Directory and its access control.
upvoted 3 times
KAUS2 5 months ago
A is correct
Use AD Connector if you only need to allow your on-premises users to log in to AWS applications and services with their Active Directory credentials. You can also use AD Connector to join Amazon EC2 instances to your existing Active Directory domain.
Pls refer - https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html#adconnector
upvoted 3 times
mbuck2023 2 weeks, 5 days ago
wrong, answer is D. Amazon FSx does not support Active Directory Connector and Simple Active Directory. See also https://docs.aws.amazon.com/fsx/latest/WindowsGuide/self-managed-AD.html.
upvoted 1 times
techhb 5 months, 1 week ago
Going with D here
upvoted 1 times
Aninina 5 months, 2 weeks ago
D. Join the file system to the Active Directory to restrict access.
The best way to restrict access to the FSx for Windows File Server SMB compliance shares, folders, and files after the move to AWS is to join the file system to the on-premises Active Directory. This will allow the company to continue using the Active Directory groups to restrict access to the files and folders, without the need to create additional IAM groups or roles.
By joining the file system to the Active Directory, the company can continue to use the same access control mechanisms it already has in place and the security configuration will not change.
Option A and B are not applicable to FSx for Windows File Server because it doesn't support the use of IAM groups or tags to restrict access. Option C is not appropriate in this case because FSx for Windows File Server does not support using IAM service-linked roles to restrict access.
upvoted 4 times
Question #261 Topic 1
A company recently announced the deployment of its retail website to a global audience. The website runs on multiple Amazon EC2 instances behind an Elastic Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones.
The company wants to provide its customers with different versions of content based on the devices that the customers use to access the website.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
A. Configure Amazon CloudFront to cache multiple versions of the content.
B. Configure a host header in a Network Load Balancer to forward traffic to different instances.
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
D. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to different EC2 instances.
E. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to different EC2 instances.
Community vote distribution
AC (100%)
Parsons Highly Voted 5 months, 2 weeks ago
A, C is correct.
NLB listener rules only support protocol & port (not host-based routing like an ALB) => D, E are incorrect. NLB works at Layer 4 (TCP/UDP) instead of Layer 7 (HTTP) => B is incorrect.
After eliminating, AC should be the answer.
upvoted 9 times
Yadav_Sanjay Most Recent 1 month, 1 week ago
NLB does not support host- or path-based routing
upvoted 1 times
omoakin 1 month, 2 weeks ago
A C
Configure Amazon CloudFront to cache multiple versions of the content.
Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
upvoted 1 times
omoakin 1 month, 2 weeks ago
C
Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
upvoted 1 times
GalileoEC2 3 months, 1 week ago
Using a Directory Connector to connect the on-premises Active Directory to AWS is one way to enable access to AWS resources, including Amazon FSx for Windows File Server. However, joining the Amazon FSx for Windows File Server file system to the on-premises Active Directory is a separate step that allows you to control access to the file shares using the same Active Directory groups that are used on-premises.
upvoted 1 times
LoXeras 3 months, 1 week ago
I guess this belongs to the question before #260
upvoted 2 times
wors 4 months, 2 weeks ago
So does this mean the entire architecture needs to move to Lambda in order to leverage Lambda@Edge? That doesn't make sense, as the question says the architecture is already on EC2, an ASG, and an ELB.
Just looking for clarification in case I am missing something.
upvoted 1 times
devonwho 4 months, 4 weeks ago
AC are the correct answers.
For C:
IMPROVED USER EXPERIENCE
Lambda@Edge can help improve your users' experience with your websites and web applications across the world, by letting you personalize content for them without sacrificing performance.
Real-time Image Transformation
You can customize your users' experience by transforming images on the fly based on the user characteristics. For example, you can resize images based on the viewer's device type—mobile, desktop, or tablet. You can also cache the transformed images at CloudFront Edge locations to further improve performance when delivering images.
https://aws.amazon.com/lambda/edge/
upvoted 2 times
Aninina 5 months, 2 weeks ago
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
Lambda@Edge allows you to run a Lambda function in response to specific CloudFront events, such as a viewer request, an origin request, a response, or a viewer response.
upvoted 2 times
Question #262 Topic 1
A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache cluster and an App VPC for the application’s Amazon EC2 instances. Both VPCs are in the us-east-1 Region.
The solutions architect must implement a solution to provide the application’s EC2 instances with access to the ElastiCache cluster. Which solution will meet these requirements MOST cost-effectively?
A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
B. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the ElastiCache cluster's security group to allow inbound connection from the application’s security group.
C. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the peering connection’s security group to allow inbound connection from the application’s security group.
D. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the Transit VPC’s security group to allow inbound connection from the application’s security group.
Community vote distribution
A (100%)
mhmt4438 Highly Voted 5 months, 2 weeks ago
A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
Creating a peering connection between the VPCs allows the application's EC2 instances to communicate with the ElastiCache cluster directly and efficiently. This is the most cost-effective solution as it does not involve creating additional resources such as a Transit VPC, and it does not incur additional costs for traffic passing through the Transit VPC. Additionally, it is also more secure as it allows you to configure a more restrictive security group rule to allow inbound connection from only the application's security group.
upvoted 10 times
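The three steps in answer A map directly onto a handful of boto3 request parameters. A minimal sketch as pure parameter builders, assuming hypothetical route table, security group, CIDR, and peering IDs (none of these values come from the question):

```python
# Hypothetical helpers that build the boto3 request parameters for answer A.
# All IDs, CIDRs, and the Redis port are placeholder assumptions.

def peering_route(route_table_id: str, dest_cidr: str, pcx_id: str) -> dict:
    """Params for ec2.create_route(**...): send peer-VPC traffic over the peering link."""
    return {"RouteTableId": route_table_id,
            "DestinationCidrBlock": dest_cidr,
            "VpcPeeringConnectionId": pcx_id}

def cache_sg_ingress(cache_sg: str, app_sg: str, port: int = 6379) -> dict:
    """Params for ec2.authorize_security_group_ingress(**...): allow the app SG in."""
    return {"GroupId": cache_sg,
            "IpPermissions": [{"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
                               "UserIdGroupPairs": [{"GroupId": app_sg}]}]}

# A route table entry is needed in BOTH VPCs, one per direction:
routes = [peering_route("rtb-app", "10.0.0.0/16", "pcx-0abc"),    # App VPC -> Cache VPC
          peering_route("rtb-cache", "10.1.0.0/16", "pcx-0abc")]  # Cache VPC -> App VPC
```

The two builders also show why option C fails: the inbound rule belongs on the ElastiCache cluster's security group, while the routes live in each VPC's route table; a peering connection has no security group of its own.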
smartegnine Most Recent 2 weeks ago
A is correct,
A Transit VPC is used for more complex architectures and can provide many-to-many VPC connectivity, but for simple VPC-to-VPC connectivity a peering connection is enough.
To enable private IPv4 traffic between instances in peered VPCs, you must add a route to the route tables associated with the subnets for both instances.
So based on (1), B and D are out; based on (2), C is out.
upvoted 1 times
wRhlH 3 weeks ago
Why not C? Any explanation?
upvoted 1 times
smartegnine 1 week, 4 days ago
The application reads from ElastiCache, not vice versa, so the inbound rule should be on the ElastiCache cluster's security group.
upvoted 2 times
Cor5in 1 day, 19 hours ago
Thank you Sir!
upvoted 1 times
smartegnine 2 weeks ago
To enable private IPv4 traffic between instances in peered VPCs, you must add a route to the route tables associated with the subnets for both instances.
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html
upvoted 1 times
nder 4 months ago
Cost Effectively!
upvoted 1 times
Question #263 Topic 1
A company is building an application that consists of several microservices. The company has decided to use container technologies to deploy its software on AWS. The company needs a solution that minimizes the amount of ongoing effort for maintenance and scaling. The company cannot manage additional infrastructure.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster.
B. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones.
C. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of greater than or equal to 2.
D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2.
E. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or more replicas for each microservice.
Community vote distribution
AD (100%)
LoXeras 3 months, 1 week ago
AWS Fargate is a serverless solution to use on ECS: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html
upvoted 2 times
lambda15 3 months, 1 week ago
Why is C incorrect?
upvoted 1 times
Julio98 3 months, 1 week ago
Because the question says "minimizes the amount of ongoing effort for maintenance and scaling", and with EC2 instances you need effort to maintain the infrastructure, unlike Fargate, which is serverless.
upvoted 2 times
WherecanIstart 3 months, 2 weeks ago
AWS Fargate is fully managed by AWS; it handles provisioning, configuration, and scaling. It is "serverless".
upvoted 1 times
AlessandraSAA 3 months, 3 weeks ago
ECS has 2 launch types: EC2 (you maintain the infra) and Fargate (serverless). Since the question asks for no additional infra to manage, it should be Fargate.
upvoted 2 times
devonwho 4 months, 4 weeks ago
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 3 times
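As a sketch of what answers A + D amount to in practice, here is a hypothetical parameter builder for `ecs.create_service` with the Fargate launch type and a desired count of at least 2. Cluster, task definition, subnet, and security group names are made-up placeholders:

```python
# Hypothetical ecs.create_service(**...) parameters for answer D.
# All names/IDs below are illustrative assumptions, not values from the question.

def fargate_service(cluster: str, task_def: str, subnets: list, sg: str,
                    desired_count: int = 2) -> dict:
    assert desired_count >= 2, "question asks for a desired task count >= 2"
    return {
        "cluster": cluster,
        "serviceName": f"{task_def.split(':')[0]}-svc",
        "taskDefinition": task_def,
        "desiredCount": desired_count,
        "launchType": "FARGATE",  # no EC2 instances or K8s control plane to manage
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,            # spread across AZs for availability
                "securityGroups": [sg],
                "assignPublicIp": "DISABLED",
            }
        },
    }
```

With `launchType="FARGATE"` there is no capacity provider or instance fleet to patch or scale, which is the "no additional infrastructure" requirement that rules out options B, C, and E.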
Aninina 5 months, 1 week ago
AD is the correct answer
upvoted 1 times
mhmt4438 5 months, 2 weeks ago
A,D is correct answer
upvoted 2 times
AHUI 5 months, 2 weeks ago
AD:
upvoted 2 times
Morinator 5 months, 2 weeks ago
AD - EC2 is out for this; cluster + Fargate is the right answer
upvoted 3 times
Question #264 Topic 1
A company has a web application hosted over 10 Amazon EC2 instances with traffic directed by Amazon Route 53. The company occasionally
experiences a timeout error when attempting to browse the application. The networking team finds that some DNS queries return IP addresses of unhealthy instances, resulting in the timeout error.
What should a solutions architect implement to overcome these timeout errors?
A. Create a Route 53 simple routing policy record for each EC2 instance. Associate a health check with each record.
B. Create a Route 53 failover routing policy record for each EC2 instance. Associate a health check with each record.
C. Create an Amazon CloudFront distribution with EC2 instances as its origin. Associate a health check with the EC2 instances.
D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route to the ALB from Route 53.
Community vote distribution
D (79%) 14% 7%
joechen2023 1 week, 1 day ago
I believe both C and D will work, but C seems less complex.
Hopefully somebody here who is more advanced (not an old student learning AWS like me) can explain why not C.
upvoted 1 times
Abrar2022 3 weeks, 6 days ago
Option D allows for the creation of an Application Load Balancer which can detect unhealthy instances and redirect traffic away from them.
upvoted 2 times
Steve_4542636 3 months, 4 weeks ago
I vote d
upvoted 1 times
techhb 5 months, 1 week ago
Why not B
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html#dns-failover-types-active-passive
upvoted 2 times
techhb 5 months, 1 week ago
It's D, found the root cause:
Option B is not the best option to overcome these timeout errors because it is not designed to handle traffic directed by Amazon Route 53. Option B creates a failover routing policy record for each EC2 instance, which is designed to route traffic to a backup EC2 instance if one of the EC2 instances becomes unhealthy. This is not ideal for routing traffic from Route 53 as it does not allow for the redirection of traffic away from unhealthy instances. Option D would be the best choice as it allows for the creation of an Application Load Balancer which can detect unhealthy instances and redirect traffic away from them.
upvoted 4 times
F629 1 week, 2 days ago
I think the problem with the failover routing policy is that it always sends requests to the same primary instance, instead of spreading them across all healthy instances.
upvoted 1 times
AHUI 5 months, 2 weeks ago
Ans: D
upvoted 1 times
Aninina 5 months, 2 weeks ago
D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route to the ALB from Route 53.
An Application Load Balancer (ALB) allows you to distribute incoming traffic across multiple backend instances, and can automatically route traffic to healthy instances while removing traffic from unhealthy instances. By using an ALB in front of the EC2 instances and routing traffic to it from Route 53, the load balancer can perform health checks on the instances and only route traffic to healthy instances, which should help to reduce or eliminate timeout errors caused by unhealthy instances.
upvoted 4 times
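The health-check settings answer D relies on are configured on the ALB's target group. A hypothetical `elbv2.create_target_group` parameter sketch (the `/health` path and thresholds are assumptions, not from the question):

```python
# Hypothetical elbv2.create_target_group(**...) parameters for answer D.
# The ALB only forwards to targets that pass this health check.

def web_target_group(name: str, vpc_id: str) -> dict:
    return {
        "Name": name,
        "Protocol": "HTTP",
        "Port": 80,
        "VpcId": vpc_id,
        "TargetType": "instance",          # register the 10 EC2 instances
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPath": "/health",      # assumed app health endpoint
        "HealthCheckIntervalSeconds": 30,
        "HealthyThresholdCount": 3,
        "UnhealthyThresholdCount": 2,      # mark bad instances out quickly
    }
```

Route 53 then needs only a single alias record pointing at the ALB, replacing the per-instance records that were returning the IPs of unhealthy instances.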
Question #265 Topic 1
A solutions architect needs to design a highly available application consisting of web, application, and database tiers. HTTPS content delivery should be as close to the edge as possible, with the least delivery time.
Which solution meets these requirements and is MOST secure?
A. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
B. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
D. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
Community vote distribution
C (100%)
Aninina Highly Voted 5 months, 2 weeks ago
C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
This solution meets the requirements for a highly available application with web, application, and database tiers, as well as providing edge-based content delivery. Additionally, it maximizes security by having the ALB in a private subnet, which limits direct access to the web servers, while still being able to serve traffic over the Internet via the public ALB. This will ensure that the web servers are not exposed to the public Internet, which reduces the attack surface and provides a secure way to access the application.
upvoted 10 times
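The origin wiring in answer C can be sketched as a fragment of a CloudFront `DistributionConfig`. This is an illustrative, partial sketch (a real config needs more required fields); the origin ID and DNS name are placeholders:

```python
# Hypothetical partial DistributionConfig for answer C: the public ALB is a
# custom origin reached over HTTPS only, and viewers are forced to HTTPS.

def alb_origin_distribution(alb_dns: str) -> dict:
    return {
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "public-alb",
            "DomainName": alb_dns,               # e.g. the ALB's DNS name
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",  # TLS from edge to ALB
            },
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "public-alb",
            "ViewerProtocolPolicy": "redirect-to-https",  # TLS at the edge
        },
    }
```

Note that the origin is the ALB, never the EC2 instances themselves: the instances stay in private subnets with no public IPs, which is what makes C more secure than B and D.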
AHUI 5 months, 2 weeks ago
ans: C
upvoted 1 times
Morinator 5 months, 2 weeks ago
Instances in private subnets, ALB in public subnets; point CloudFront at the public ALB.
upvoted 3 times
Question #266 Topic 1
A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user
experience and introduce unfair advantages to some players. The application is deployed in every AWS Region. It runs on Amazon EC2 instances
that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism to monitor the health of the application and redirect traffic to healthy endpoints.
Which solution meets these requirements?
A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.
B. Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
C. Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache headers. Use AWS Lambda functions to optimize the traffic.
D. Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to act as the in-memory cache for DynamoDB hosting the application data.
Community vote distribution
A (100%)
Aninina Highly Voted 5 months, 2 weeks ago
A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional endpoint in each Region. Add the ALB as the endpoint.
AWS Global Accelerator directs traffic to the optimal healthy endpoint based on health checks, it can also route traffic to the closest healthy endpoint based on geographic location of the client. By configuring an accelerator and attaching it to a Regional endpoint in each Region, and adding the ALB as the endpoint, the solution will redirect traffic to healthy endpoints, improving the user experience by reducing latency and ensuring that the application is running optimally. This solution will ensure that traffic is directed to the closest healthy endpoint and will help to improve the overall user experience.
upvoted 12 times
alanp Highly Voted 5 months, 2 weeks ago
A. When you have an Application Load Balancer or Network Load Balancer that includes multiple target groups, Global Accelerator considers the load balancer endpoint to be healthy only if each target group behind the load balancer has at least one healthy target. If any single target group for the load balancer has only unhealthy targets, Global Accelerator considers the endpoint to be unhealthy.
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoint-groups-health-check-options.html
upvoted 7 times
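Answer A's "Regional endpoint in each Region with the ALB as the endpoint" corresponds to one Global Accelerator endpoint group per Region. A hypothetical `globalaccelerator.create_endpoint_group` parameter sketch, assuming the accelerator and its listener already exist (all ARNs are placeholders):

```python
# Hypothetical globalaccelerator.create_endpoint_group(**...) parameters for
# answer A: one group per Region, each pointing at that Region's ALB. Global
# Accelerator health-checks the endpoint and shifts traffic if it is unhealthy.

def regional_endpoint_group(listener_arn: str, region: str, alb_arn: str) -> dict:
    return {
        "ListenerArn": listener_arn,
        "EndpointGroupRegion": region,
        "EndpointConfigurations": [{"EndpointId": alb_arn, "Weight": 128}],
        "HealthCheckIntervalSeconds": 10,   # fast detection for latency-sensitive games
        "ThresholdCount": 3,
        "TrafficDialPercentage": 100.0,
    }

# One endpoint group per deployed Region (example Regions, not from the question):
groups = [regional_endpoint_group("arn:aws:globalaccelerator::111122223333:accelerator/abc/listener/xyz",
                                  r, f"arn:alb-{r}")
          for r in ("us-east-1", "eu-west-1", "ap-southeast-2")]
```

Because the accelerator's anycast IPs route clients to the nearest healthy Regional endpoint, this also addresses the latency requirement without any caching layer.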
antropaws Most Recent 1 month ago
michellemeloc 1 month, 1 week ago
Delivering gaming content --> AWS Global Accelerator
upvoted 3 times
Bhrino 4 months, 1 week ago
Global Accelerator can be used for non-HTTP cases such as UDP, TCP, gaming, or VoIP
upvoted 4 times
AHUI 5 months, 2 weeks ago
A:
upvoted 1 times
Morinator 5 months, 2 weeks ago
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoint-groups-health-check-options.html
upvoted 1 times
Question #267 Topic 1
A company has one million users that use its mobile app. The company must analyze the data usage in near-real time. The company also must encrypt the data in near-real time and must store the data in a centralized location in Apache Parquet format for further processing.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data. Invoke an AWS Lambda function to send the data to the Kinesis Data Analytics application.
B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data. Invoke an AWS Lambda function to send the data to the EMR cluster.
C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data.
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data.
Community vote distribution
D (100%)
mhmt4438 Highly Voted 5 months, 2 weeks ago
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data.
This solution will meet the requirements with the least operational overhead as it uses Amazon Kinesis Data Firehose, which is a fully managed service that can automatically handle the data collection, data transformation, encryption, and data storage in near-real time. Kinesis Data Firehose can automatically store the data in Amazon S3 in Apache Parquet format for further processing. Additionally, it allows you to create an Amazon Kinesis Data Analytics application to analyze the data in near real-time, with no need to manage any infrastructure or invoke any Lambda function. This way you can process a large amount of data with the least operational overhead.
upvoted 24 times
antropaws 1 month ago
https://aws.amazon.com/blogs/big-data/analyzing-apache-parquet-optimized-data-using-amazon-kinesis-data-firehose-amazon-athena-and-amazon-redshift/
upvoted 1 times
WherecanIstart 3 months, 2 weeks ago
Thanks for the explanation!
upvoted 1 times
jainparag1 5 months, 1 week ago
Nicely explained. Thanks.
upvoted 1 times
LuckyAro 5 months, 1 week ago
Apache Parquet format processing was not mentioned in the answer options. Strange.
upvoted 5 times
AHUI Most Recent 5 months, 2 weeks ago
D:
upvoted 1 times
Aninina 5 months, 2 weeks ago
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the data.
Amazon Kinesis Data Firehose can automatically encrypt and store the data in Amazon S3 in Apache Parquet format for further processing, which reduces the operational overhead. It also allows for near-real-time data analysis using Kinesis Data Analytics, which is a fully managed service that makes it easy to analyze streaming data using SQL. This solution eliminates the need for setting up and maintaining an EMR cluster, which would require more operational overhead.
upvoted 2 times
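The encryption and Parquet-conversion pieces that make D low-overhead are all declared in one Firehose delivery stream definition. A hypothetical, partial sketch of the `firehose.create_delivery_stream` parameters (all ARNs and Glue names are placeholders; Parquet conversion requires a Glue table for the schema):

```python
# Hypothetical firehose.create_delivery_stream(**...) fragment for answer D:
# Firehose encrypts with KMS and converts JSON records to Parquet before S3,
# with no servers to manage.

def firehose_parquet_config(bucket_arn: str, role_arn: str, kms_arn: str,
                            glue_db: str, glue_table: str) -> dict:
    return {
        "DeliveryStreamName": "app-usage",
        "DeliveryStreamType": "DirectPut",
        "ExtendedS3DestinationConfiguration": {
            "BucketARN": bucket_arn,
            "RoleARN": role_arn,
            "EncryptionConfiguration": {              # encrypt at rest in S3
                "KMSEncryptionConfig": {"AWSKMSKeyARN": kms_arn}},
            "DataFormatConversionConfiguration": {    # JSON in -> Parquet out
                "Enabled": True,
                "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
                "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
                "SchemaConfiguration": {"DatabaseName": glue_db,
                                        "TableName": glue_table,
                                        "RoleARN": role_arn},
            },
        },
    }
```

A Kinesis Data Analytics application can then read the same stream for the near-real-time analysis; nothing in the pipeline needs a Lambda function or an EMR cluster.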
Question #268 Topic 1
A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load
Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application’s
architecture.
What should a solutions architect do to meet these requirements?
A. Use Amazon ElastiCache in front of the database.
B. Use RDS Proxy between the application and the database.
C. Migrate the application from EC2 instances to AWS Lambda.
D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB.
Community vote distribution
B (60%) A (40%)
kraken21 Highly Voted 2 months, 4 weeks ago
RDS Proxy will "improve the user experience while minimizing changes".
upvoted 7 times
Steve_4542636 Highly Voted 3 months, 4 weeks ago
RDS Proxy is for too many connections, not for performance
upvoted 7 times
vipyodha 5 days, 19 hours ago
To use ElastiCache you need to make heavy code changes; also, ElastiCache does caching that can improve read performance but will not provide scalability
upvoted 1 times
Yadav_Sanjay 1 month, 1 week ago
Can't use a cache as the score gets updated. If the data were static then we could definitely go with A. But here the score is dynamic...
upvoted 2 times
rfelipem 4 weeks ago
Users are starting to experience long delays and interruptions caused by the "read performance" of the database... While the score is dynamic, there is also read activity in the DB that is causing the delays and outages and this can be improved with Elastic Cache.
upvoted 2 times
jayce5 Most Recent 2 weeks ago
It is not clearly stated, but I believe that the game scores will be updated frequently. The answer should be the RDS proxy, not the cache.
upvoted 2 times
migo7 2 weeks, 2 days ago
B is correct as it requires minimum changes and A is wrong because creating the cache will require writing manual coding
upvoted 1 times
DrWatson 3 weeks, 5 days ago
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/elasticache-use-cases.html#elasticache-for-redis-use-cases-gaming
https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
upvoted 1 times
arjundevops 2 months, 1 week ago
RDS Proxy is a fully managed database proxy that allows applications to pool and share connections to an RDS database instance, reducing the number of connections made to the database and improving the performance of read-heavy workloads. RDS Proxy also provides connection pooling and automatic failover, which can help to improve the availability and performance of the database.
By using RDS Proxy between the application and the database, the gaming company can improve the performance of the application without making significant changes to the application's architecture. RDS Proxy can help to reduce the number of connections made to the database, optimize query execution, and provide automatic failover in case of a database failure.
upvoted 4 times
The problem is not the number of connections to the database; it's slow performance because of read operations ... this problem is solved with ElastiCache, so the answer is A
upvoted 1 times
ChatGPT says B
upvoted 3 times
Choose A, as the issue is DB read performance
upvoted 1 times
"minimizing changes to the application’s architecture" -> B. ElastiCache requires application logic to handle it.
upvoted 4 times
RDS proxy
upvoted 2 times
By using RDS Proxy, the application can offload the task of managing database connections and pooling from the application to the proxy. This can help reduce connection overhead, improve connection reuse, and help to reduce the overall number of connections to the database, which can lead to better performance.
Additionally, RDS Proxy has built-in read and write connection pooling, which can help to reduce latency and improve throughput for read-heavy workloads like the gaming company's web application.
Overall, using RDS Proxy is a good option for improving the user experience and database performance without making significant changes to the application's architecture.
upvoted 2 times
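For what answer B would look like in practice, here is a hypothetical `rds.create_db_proxy` parameter sketch. The secret, role, and subnet values are placeholders; RDS Proxy authenticates to the database with a Secrets Manager secret rather than application credentials:

```python
# Hypothetical rds.create_db_proxy(**...) parameters for answer B. The proxy
# pools and shares connections in front of the RDS for MySQL instance.

def rds_proxy_params(secret_arn: str, role_arn: str, subnets: list) -> dict:
    return {
        "DBProxyName": "scores-proxy",
        "EngineFamily": "MYSQL",
        "Auth": [{"AuthScheme": "SECRETS",          # DB creds come from Secrets Manager
                  "SecretArn": secret_arn,
                  "IAMAuth": "DISABLED"}],
        "RoleArn": role_arn,                        # lets the proxy read the secret
        "VpcSubnetIds": subnets,
        "RequireTLS": True,
    }
```

The only change on the application side is swapping the database hostname for the proxy endpoint, which is the "minimizing changes" argument made above; a cache, by contrast, needs read-through/invalidation logic in the code.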
Anyone know if A or B is the correct answer?
upvoted 1 times
B is the correct answer, A would require significant changes to the application code
upvoted 3 times
abitwrong 3 months, 2 weeks ago
Amazon RDS Proxy can be enabled for most applications with no code changes. (https://aws.amazon.com/rds/proxy/)
You can also use Amazon RDS Proxy with read-only endpoints to help you achieve read scalability of your read-heavy workloads. (https://aws.amazon.com/blogs/database/use-amazon-rds-proxy-with-read-only-endpoints/)
Elasticache can improve read performance but it relies on heavy code changes, so A is incorrect.
upvoted 3 times
It should be B; the key here is to minimize application changes.
upvoted 1 times
The correct answer is B. With Amazon RDS Proxy, you can allow your applications to pool and share database connections to improve their ability to scale. RDS Proxy makes applications more resilient to database failures by automatically connecting to a standby DB instance while preserving application connections.
upvoted 3 times
Question #269 Topic 1
An ecommerce company has noticed performance degradation of its Amazon RDS based web application. The performance degradation is
attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application.
What should the solutions architect recommend?
A. Export the data to Amazon DynamoDB and have the business analysts run their queries.
B. Load the data into Amazon ElastiCache and have the business analysts run their queries.
C. Create a read replica of the primary database and have the business analysts run their queries.
D. Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.
Community vote distribution
C (100%)
mhmt4438 5 months, 2 weeks ago
C is correct answer
upvoted 2 times
Aninina 5 months, 2 weeks ago
C. Create a read replica of the primary database and have the business analysts run their queries.
Creating a read replica of the primary RDS database will offload the read-only SQL queries from the primary database, which will help to improve the performance of the web application. Read replicas are exact copies of the primary database that can be used to handle read-only traffic, which will reduce the load on the primary database and improve the performance of the web application. This solution can be implemented with minimal changes to the existing web application, as the business analysts can continue to run their queries on the read replica without modifying the code.
upvoted 4 times
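Answer C is a single API call. A hypothetical `rds.create_db_instance_read_replica` parameter sketch (the identifiers and instance class are assumptions): the analysts point their BI tools at the replica's endpoint, and the web application keeps using the primary unchanged.

```python
# Hypothetical rds.create_db_instance_read_replica(**...) parameters for
# answer C. Identifiers and sizing are placeholder assumptions.

def read_replica_params(source_id: str) -> dict:
    return {
        "DBInstanceIdentifier": f"{source_id}-analytics-replica",
        "SourceDBInstanceIdentifier": source_id,   # the existing primary
        "DBInstanceClass": "db.r6g.large",         # sized for the BI queries
        "PubliclyAccessible": False,
    }
```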
bamishr 5 months, 2 weeks ago
Create a read replica of the primary database and have the business analysts run their queries.
upvoted 1 times
Question #270 Topic 1
A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit.
Which solution meets these requirements?
A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
B. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets.
C. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads.
D. Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
Community vote distribution
A (100%)
techhb Highly Voted 5 months, 1 week ago
Here the keyword is "before": "the data is encrypted at rest before the data is uploaded to the S3 buckets."
upvoted 9 times
Abobaloyi Most Recent 1 week ago
The data must be encrypted before being uploaded, which means the client needs to do it before uploading the data to S3
upvoted 1 times
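The "before" in answer A is an ordering constraint that a short sketch makes concrete: encrypt on the client first, then build the S3 upload request, so the object is already ciphertext when it leaves the host (HTTPS to the S3 endpoint covers encryption in transit). The `encrypt` callable here is a stand-in for a real scheme such as AES-GCM envelope encryption, and the demo "cipher" is deliberately a toy:

```python
# Minimal sketch of client-side encryption (answer A): the data is encrypted
# BEFORE the put_object request is assembled, so S3 only ever sees ciphertext.

def client_side_put(bucket: str, key: str, plaintext: bytes, encrypt) -> dict:
    ciphertext = encrypt(plaintext)        # encrypted at rest before upload
    assert ciphertext != plaintext
    return {"Bucket": bucket, "Key": key, "Body": ciphertext}

# Demo with a toy XOR "cipher" (NOT real cryptography, illustration only):
toy_encrypt = lambda b: bytes(x ^ 0x5A for x in b)
req = client_side_put("central-logs", "app/2023/06/log.gz", b"hello", toy_encrypt)
```

Options B, C, and D all encrypt server-side, i.e. after the data has reached S3, which fails the "before upload" wording.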
datz 2 months, 3 weeks ago
A would meet the requirements.
upvoted 1 times
nder 4 months ago
Because the data must be encrypted while in transit
upvoted 2 times
Aninina 5 months, 2 weeks ago
A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
upvoted 1 times
bamishr 5 months, 2 weeks ago
Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets
upvoted 1 times
Question #271 Topic 1
A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete.
What should the solutions architect do to meet these requirements?
A. Increase the minimum capacity for the Auto Scaling group.
B. Increase the maximum capacity for the Auto Scaling group.
C. Configure scheduled scaling to scale up to the desired compute level.
D. Change the scaling policy to add more EC2 instances during each scaling operation.
Community vote distribution
C (100%)
david76x Highly Voted 5 months, 1 week ago
C is correct. Good luck everybody!
upvoted 7 times
ManOnTheMoon Highly Voted 4 months, 2 weeks ago
GOOD LUCK EVERYONE :) YOU CAN DO THIS
upvoted 6 times
qacollin Most Recent 2 months, 1 week ago
just scheduled my exam :)
upvoted 2 times
awscerts023 4 months, 2 weeks ago
Reached here! Did anyone schedule the real exam now? How was it?
upvoted 3 times
pal40sg 4 months, 2 weeks ago
Thanks to everyone who contributed with answers :)
upvoted 3 times
ProfXsamson 4 months, 3 weeks ago
C. I'm here at the end, leaving this here for posterity sake 02/01/2023.
upvoted 3 times
dedline 5 months ago
GL ALL!
upvoted 3 times
Aninina 5 months, 2 weeks ago
C. Configure scheduled scaling to scale up to the desired compute level.
By configuring scheduled scaling, the solutions architect can set the Auto Scaling group to automatically scale up to the desired compute level at a specific time (1AM) when the batch job starts and then automatically scale down after the job is complete. This will allow the desired EC2 capacity to be reached quickly and also help in reducing the cost.
upvoted 4 times
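Scheduled scaling is a pair of `autoscaling.put_scheduled_update_group_action` calls. A hypothetical sketch of the two actions (group name, peak size, lead time, and the scale-down hour are assumptions; `Recurrence` cron expressions are evaluated in UTC unless a `TimeZone` is set):

```python
# Hypothetical put_scheduled_update_group_action(**...) parameters for
# answer C: pre-scale shortly before the 1 AM batch, scale back down after.

def nightly_schedule(group: str, peak: int):
    scale_up = {
        "AutoScalingGroupName": group,
        "ScheduledActionName": "batch-scale-up",
        "Recurrence": "45 0 * * *",   # 00:45 daily, ahead of the 1 AM start
        "MinSize": peak, "MaxSize": peak, "DesiredCapacity": peak,
    }
    scale_down = {
        "AutoScalingGroupName": group,
        "ScheduledActionName": "batch-scale-down",
        "Recurrence": "0 3 * * *",    # assumed end of the batch window
        "MinSize": 1, "MaxSize": peak, "DesiredCapacity": 1,
    }
    return scale_up, scale_down
```

This is why A and B are wasteful: raising the minimum or maximum keeps (or allows) capacity all day, while the schedule pays for the peak only during the batch window.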
bamishr 5 months, 2 weeks ago
Configure scheduled scaling to scale up to the desired compute level.
upvoted 1 times
Morinator 5 months, 2 weeks ago
predictable = scheduled scaling
upvoted 3 times
Question #272 Topic 1
A company serves a dynamic website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The website needs to support multiple languages to serve customers around the world. The website’s architecture is running in the us-west-1 Region and is exhibiting high request latency for users that are located in other parts of the world.
The website needs to serve requests quickly and efficiently regardless of a user’s location. However, the company does not want to recreate the existing architecture across multiple Regions.
What should a solutions architect do to meet these requirements?
A. Replace the existing architecture with a website that is served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.
B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.
C. Create an Amazon API Gateway API that is integrated with the ALB. Configure the API to use the HTTP integration type. Set up an API Gateway stage to enable the API cache based on the Accept-Language request header.
D. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the EC2 instances and the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
Community vote distribution
B (100%)
Yechi Highly Voted 4 months, 1 week ago
Configuring caching based on the language of the viewer
If you want CloudFront to cache different versions of your objects based on the language specified in the request, configure CloudFront to forward the Accept-Language header to your origin.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html
upvoted 6 times
kraken21 Most Recent 2 months, 4 weeks ago
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html#header-caching-web-language
upvoted 1 times
LuckyAro 4 months, 1 week ago
B is the correct answer
upvoted 1 times
Question #273 Topic 1
A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.
Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?
A. Use an Amazon Aurora global database with a pilot light deployment.
B. Use an Amazon Aurora global database with a warm standby deployment.
C. Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment.
D. Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment.
Community vote distribution
B (96%) 4%
nickolaj Highly Voted 4 months, 1 week ago
Option A is incorrect because while Amazon Aurora global database is a good solution for disaster recovery, pilot light deployment provides only a minimalistic setup and would require manual intervention to make the DR Region fully operational, which increases the recovery time.
Option B is a better choice than Option A as it provides a warm standby deployment, which is an automated and more scalable setup than pilot light deployment. In this setup, the database is replicated to the DR Region, and the standby instance can be brought up quickly in case of a disaster.
Option C is incorrect because Multi-AZ DB instances provide high availability, not disaster recovery.
Option D is a good choice for high availability, but it does not meet the requirement for DR in a different region with the least possible latency.
upvoted 13 times
Yechi Highly Voted 4 months, 1 week ago
Note: The difference between pilot light and warm standby can sometimes be difficult to understand. Both include an environment in your DR Region with copies of your primary Region assets. The distinction is that pilot light cannot process requests without additional action taken first, whereas warm standby can handle traffic (at reduced capacity levels) immediately. The pilot light approach requires you to “turn on” servers, possibly deploy additional (non-core) infrastructure, and scale up, whereas warm standby only requires you to scale up (everything is already deployed and running). Use your RTO and RPO needs to help you choose between these approaches.
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 11 times
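The Aurora global database in option B can be sketched as the two request payloads involved: one to create the global cluster from the primary, one to add a secondary cluster in the DR Region. A minimal sketch assuming the RDS CreateGlobalCluster / CreateDBCluster parameter names; all identifiers and Regions are hypothetical.

```python
# Sketch of option B's database layer: an Aurora global database whose
# secondary Region acts as the warm standby. Identifiers are hypothetical.
global_cluster = {
    "GlobalClusterIdentifier": "ecommerce-global",            # hypothetical name
    "SourceDBClusterIdentifier": "ecommerce-primary-cluster", # existing primary
    "Engine": "aurora-mysql",
}

# The secondary cluster joins the global database and receives storage-level
# replication (typically sub-second lag), which is what keeps the DR Region
# "up to date with the least possible latency".
secondary_cluster = {
    "DBClusterIdentifier": "ecommerce-secondary",
    "GlobalClusterIdentifier": global_cluster["GlobalClusterIdentifier"],
    "Engine": global_cluster["Engine"],
    # The DR Region is chosen by the regional client the call is made with,
    # e.g. boto3.client("rds", region_name="us-west-2") in a real deployment.
}

assert secondary_cluster["GlobalClusterIdentifier"] == "ecommerce-global"
```

The warm-standby half of B is simply the rest of the stack deployed in the DR Region at reduced capacity, ready to scale up, which is what gives the lowest RTO among the options.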
krisfromtw Most Recent 4 months, 1 week ago
leoattf 4 months ago
No, my friend. The question asks for deployment in another Region. Hence, it cannot be C or D.
The answer is B because it is Global (different Regions) and Warm Standby has a faster RTO than Pilot Light.
upvoted 7 times
Question #274 Topic 1
A company runs an application on Amazon EC2 instances. The company needs to implement a disaster recovery (DR) solution for the application. The DR solution needs to have a recovery time objective (RTO) of less than 4 hours. The DR solution also needs to use the fewest possible AWS resources during normal operations.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS Lambda and custom scripts.
B. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS CloudFormation.
C. Launch EC2 instances in a secondary AWS Region. Keep the EC2 instances in the secondary Region active at all times.
D. Launch EC2 instances in a secondary Availability Zone. Keep the EC2 instances in the secondary Availability Zone active at all times.
Community vote distribution
B (100%)
NolaHOla Highly Voted 4 months, 1 week ago
Guys, sorry but I don't really have time to deepdive as my exam is soon. Based on chatGPT and my previous study the answer should be B "Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS CloudFormation," would likely be the most suitable solution for the given requirements.
This option allows for the creation of Amazon Machine Images (AMIs) to back up the EC2 instances, which can then be copied to a secondary AWS region to provide disaster recovery capabilities. The infrastructure deployment in the secondary region can be automated using AWS CloudFormation, which can help to reduce the amount of time and resources needed for deployment and management.
upvoted 6 times
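The AMI-copy half of option B can be sketched as the request it would take to place each backup AMI in the DR Region. A minimal sketch using the EC2 CopyImage parameter names; the AMI ID, name, and Regions are hypothetical placeholders.

```python
# Sketch of copying a backup AMI into a secondary Region (option B).
# Parameter names match EC2's CopyImage API; values are hypothetical.
copy_image_request = {
    "Name": "app-server-backup",                # name of the copy in the DR Region
    "SourceImageId": "ami-0123456789abcdef0",   # hypothetical source AMI
    "SourceRegion": "us-east-1",
    # CopyImage is called with a client in the *destination* Region, e.g.
    # boto3.client("ec2", region_name="us-west-2").copy_image(**copy_image_request)
}

assert copy_image_request["SourceImageId"].startswith("ami-")
```

With the AMIs already in the DR Region, a CloudFormation template can launch the instances on demand, which keeps steady-state resource usage minimal while staying well under a 4-hour RTO.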
SimiTik Most Recent 2 months, 1 week ago
While C may satisfy the requirement of using the fewest possible AWS resources during normal operations, it may not be the most operationally efficient or cost-effective solution in the long term.
upvoted 2 times
AlmeroSenior 4 months ago
So weird, they have a product for this (AWS Elastic Disaster Recovery), but that option is not given.
upvoted 1 times
Yechi 4 months, 1 week ago
https://docs.aws.amazon.com/zh_cn/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html#backup-and-restore
upvoted 3 times
nickolaj 4 months, 1 week ago
Option B would be the most operationally efficient solution for implementing a DR solution for the application, meeting the requirement of an RTO of less than 4 hours and using the fewest possible AWS resources during normal operations.
By creating Amazon Machine Images (AMIs) to back up the EC2 instances and copying them to a secondary AWS Region, the company can ensure that they have a reliable backup in the event of a disaster. By using AWS CloudFormation to automate infrastructure deployment in the secondary Region, the company can minimize the amount of time and effort required to set up the DR solution.
upvoted 4 times
Joan111edu 4 months, 1 week ago
the answer should be B
--->recovery time objective (RTO) of less than 4 hours.
https://docs.aws.amazon.com/zh_cn/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html#backup-and-restore
upvoted 3 times
Question #275 Topic 1
A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a minimum?
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.
D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.
Community vote distribution
C (71%) A (29%)
asoli Highly Voted 3 months, 2 weeks ago
At first, I thought the answer is A. But it is C.
It seems that there is no information in the question about CPU or Memory usage.
So, we might think the answer is A. why? because what we need is to have the required (desired) number of instances. It already has scheduled scaling that works well in this scenario. Scale down after working hours and scale up in working hours. So, it just needs to adjust the desired number to start from 20 instances.
But here is the point it shows A is WRONG!!!
If it started with a desired count of 20 instances, it will keep them for the whole day. What if the load is reduced? We do not need to keep the 20 instances always. That 20 is the MAXIMUM number we need, not the DESIRED number. So it goes against COST, which is the main objective of this question.
So, the answer is C
upvoted 10 times
mandragon 1 month, 2 weeks ago
If it starts with 20 instances it will not keep them all day. It will scale down based on demand. The scheduled action in option A simply ensures that there are enough instances running to handle the increased traffic when the day begins, while still allowing the Auto Scaling group to scale up or down based on demand during the rest of the day. https://docs.aws.amazon.com/autoscaling/ec2/userguide/scale-your-group.html
upvoted 3 times
DrWatson Most Recent 3 weeks, 5 days ago
https://docs.aws.amazon.com/autoscaling/ec2/userguide/consolidated-view-of-warm-up-and-cooldown-settings.html DefaultCooldown
Only needed if you use simple scaling policies.
API operation: CreateAutoScalingGroup, UpdateAutoScalingGroup
The amount of time, in seconds, between one scaling activity ending and another one starting due to simple scaling policies. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling (https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scaling-cooldowns.html)
Default: 300 seconds.
upvoted 1 times
Konb 1 month, 1 week ago
I think the "cost" part that talks against A is a catch. No information why the EC2s are slow - maybe it's not CPU?
On the other hand we know that "Auto Scaling group scales up to 20 instances during work hours". A seems to be the only option that kinda satisfies requirements.
upvoted 1 times
xmark443 1 month, 1 week ago
There may be days when the demand is lower, so scheduled scaling costs more than target tracking.
upvoted 1 times
justhereforccna 1 month, 2 weeks ago
Have to go with A on this one
upvoted 1 times
This option will scale up capacity faster in the morning to improve performance, but will still allow capacity to scale down during off hours. It achieves this as follows:
A target tracking action scales based on a CPU utilization target. By triggering at a lower CPU threshold in the morning, the Auto Scaling group will start scaling up sooner as traffic ramps up, launching instances before utilization gets too high and impacts performance.
Decreasing the cooldown period allows Auto Scaling to scale more aggressively, launching more instances faster until the target is reached. This speeds up the ramp-up of capacity.
However, unlike a scheduled action to set a fixed minimum/maximum capacity, with target tracking the group can still scale down during off hours based on demand. This helps minimize costs.
upvoted 2 times
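The target-tracking setup this comment describes can be sketched as the policy payload option C would need: scale on average CPU with a lower target so scale-out starts earlier during the morning ramp-up. The dict mirrors the EC2 Auto Scaling PutScalingPolicy parameters; the ASG name, warmup, and target value are illustrative assumptions.

```python
# Sketch of option C: a target-tracking policy on average CPU with a lower
# threshold and a shorter warmup, so capacity is added sooner and faster.
# Parameter names follow PutScalingPolicy; values are hypothetical.
scaling_policy = {
    "AutoScalingGroupName": "office-app-asg",   # hypothetical ASG name
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "EstimatedInstanceWarmup": 120,  # shorter warmup => more aggressive scale-out
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Lower than a typical 70% target, so instances launch earlier in
        # the morning ramp-up instead of after users already feel the lag.
        "TargetValue": 40.0,
    },
}

assert scaling_policy["PolicyType"] == "TargetTrackingScaling"
```

Because the policy tracks demand, the group still scales back down on quiet days, which is the cost argument for C over a fixed scheduled capacity of 20.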
Dr_Chomp 2 months, 2 weeks ago
I'm going with A - it tells us that 20 instances is the normal capacity during the work day - so scheduling that at the start of the work day means you don't need to put load on the system to trigger scale-out. So this is like a warm start. Cool down has nothing to do with anything and it doesn't mention anything about CPU/resources for target setting.
upvoted 1 times
kraken21 2 months, 4 weeks ago
"How should the scaling be changed to address the staff complaints and keep costs to a minimum?" Option C scales based on metrics, and combined with reducing the cooldown, the cost part is addressed.
upvoted 1 times
I will go with A based on this "The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight."
Setting the instances to 20 before the office hours start should address the issue.
upvoted 1 times
kraken21 2 months, 4 weeks ago
How about the cost part: "How should the scaling be changed to address the staff complaints and keep costs to a minimum?" By scaling to 20 instances you are incurring unnecessary instance cost. C is a better option.
upvoted 1 times
FourOfAKind 3 months, 3 weeks ago
With step scaling and simple scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that invoke the scaling process. You also define how your Auto Scaling group should be scaled when a threshold is in breach for a specified number of evaluation periods.
We strongly recommend that you use a target tracking scaling policy to scale on a metric like average CPU utilization or the RequestCountPerTarget metric from the Application Load Balancer. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html
upvoted 1 times
I vote for A
The desired capacity does not statically fix the size of the group.
Desired capacity: Represents the **initial capacity** of the Auto Scaling group at the time of creation. An Auto Scaling group attempts to maintain the desired capacity. It starts by launching the number of instances that are specified for the desired capacity, and maintains this number of instances **as long as there are no scaling policies** or scheduled actions attached to the Auto Scaling group. https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html
upvoted 2 times
C:
target tracking may be a better option for ensuring the application remains responsive during high-traffic periods while also minimizing costs during periods of low usage. The target tracking can be used without CloudWatch alarms, as it relies on CloudWatch metrics directly.
upvoted 1 times
Between closing and opening times there'll be enough "cooling down" period if necessary; however, I don't see its relationship with the solution.
upvoted 1 times
I would personally go for C, Implementing a target tracking scaling policy would allow the Auto Scaling group to adjust its capacity in response to changes in demand while keeping the specified metric at the target value
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-strategies.html
Option A is not the best solution because it sets the desired capacity to 20 shortly before the office opens, but it does not take into account the actual demand of the application. This means that the company will be paying for 20 instances all the time, even during the off-hours, which will result in unnecessary costs. Additionally, there may be days when the demand is lower or higher than expected, so it is not a scalable solution.
upvoted 3 times
Rocky2023 4 months, 1 week ago
How is decreasing cooldown related to question?
upvoted 1 times
leoattf 4 months ago
I think because by decreasing the cooldown, the scale up and down will be more sensitive, more in "real time" I would say.
upvoted 1 times
NolaHOla 4 months, 1 week ago
Honestly not completely sure, but the rest of the options either aren't the most cost-effective solution (setting capacity directly to 20 will generate cost) or are irrelevant.
upvoted 1 times
Question #276 Topic 1
A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto Scaling group. An Amazon RDS for Oracle instance is the application's data layer that uses Oracle-specific PL/SQL functions. Traffic to the application has been steadily increasing. This is causing the EC2 instances to become overloaded and the RDS instance to run out of storage. The Auto Scaling group does not have any scaling metrics and defines the minimum healthy instance count only. The company predicts that traffic will continue to increase at a steady but unpredictable
rate before leveling off.
What should a solutions architect do to ensure the system can automatically scale for the increased traffic? (Choose two.)
A. Configure storage Auto Scaling on the RDS for Oracle instance.
B. Migrate the database to Amazon Aurora to use Auto Scaling storage.
C. Configure an alarm on the RDS for Oracle instance for low free storage space.
D. Configure the Auto Scaling group to use the average CPU as the scaling metric.
E. Configure the Auto Scaling group to use the average free memory as the scaling metric.
Community vote distribution
AD (91%) 9%
klayytech Highly Voted 2 months, 4 weeks ago
Configure storage Auto Scaling on the RDS for Oracle instance.
= Makes sense. With RDS Storage Auto Scaling, you simply set your desired maximum storage limit, and Auto Scaling takes care of the rest.
Migrate the database to Amazon Aurora to use Auto Scaling storage.
= Scenario specifies application's data layer uses Oracle-specific PL/SQL functions. This rules out migration to Aurora.
Configure an alarm on the RDS for Oracle instance for low free storage space.
= You could do this but what does it fix? Nothing. The CW notification isn't going to trigger anything.
Configure the Auto Scaling group to use the average CPU as the scaling metric.
= Makes sense. The CPU utilization is the precursor to the storage outage. When the ec2 instances are overloaded, the RDS instance storage hits its limits, too.
upvoted 10 times
kruasan Most Recent 2 months ago
These options will allow the system to scale both the compute tier (EC2 instances) and the data tier (RDS storage) automatically as traffic increases:
A. Storage Auto Scaling will allow the RDS for Oracle instance to automatically increase its allocated storage when free storage space gets low. This ensures the database does not run out of capacity and can continue serving data to the application.
D. Configuring the EC2 Auto Scaling group to scale based on average CPU utilization will allow it to launch additional instances automatically as traffic causes higher CPU levels across the instances. This scales the compute tier to handle increased demand.
upvoted 2 times
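The two changes in options A and D can be sketched as the payloads they would take. The RDS side uses ModifyDBInstance's MaxAllocatedStorage to enable storage auto scaling; the compute side adds a CPU target-tracking policy to the ASG. Identifiers and thresholds are hypothetical assumptions, not values from the question.

```python
# Sketch of option A: enable RDS storage auto scaling by setting a storage
# ceiling. Parameter names follow ModifyDBInstance; values are hypothetical.
rds_modify = {
    "DBInstanceIdentifier": "oracle-app-db",  # hypothetical instance name
    "MaxAllocatedStorage": 1000,  # GiB ceiling; RDS grows storage automatically up to this
    "ApplyImmediately": True,
}

# Sketch of option D: scale the EC2 tier on average CPU utilization.
# Shape follows the Auto Scaling PutScalingPolicy parameters.
asg_policy = {
    "AutoScalingGroupName": "app-tier-asg",   # hypothetical ASG name
    "PolicyName": "scale-on-avg-cpu",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,  # illustrative target
    },
}

assert rds_modify["MaxAllocatedStorage"] > 0
```

Together these make both tiers scale automatically: storage grows toward the ceiling as it fills, and instances are added as average CPU rises, with no migration off Oracle.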
kraken21 2 months, 4 weeks ago
Storage auto scaling on RDS will ease the storage issues, and migrating Oracle PL/SQL to Aurora is cumbersome. Also, Aurora has auto storage scaling by default.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
upvoted 2 times
Nel8 3 months, 4 weeks ago
My answer is B & D...
B. Migrate the database to Amazon Aurora to use Auto Scaling Storage. --- Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.
D. Configure the Auto Scaling group to use the average CPU as the scaling metric. -- Good choice.
I believe either A & C or B & D options will work.
upvoted 2 times
FourOfAKind 3 months, 3 weeks ago
In this question, you have Oracle DB, and Amazon Aurora is for MySQL/PostgreSQL. A and D are the correct choices.
upvoted 5 times
dcp 3 months, 1 week ago
You can migrate Oracle PL/SQL to Aurora:
https://docs.aws.amazon.com/dms/latest/oracle-to-aurora-mysql-migration-playbook/chap-oracle-aurora-mysql.sql.html
upvoted 1 times
dcp 3 months, 1 week ago
I still think A is the answer, because once RDS for Oracle storage auto scaling is enabled, it will automatically adjust the storage capacity.
upvoted 1 times
Ja13 4 months ago
a and d
upvoted 3 times
KZM 4 months, 1 week ago
A and D.
upvoted 2 times
GwonLEE 4 months, 1 week ago
a and d
upvoted 2 times
LuckyAro 4 months, 1 week ago
A and D
upvoted 1 times
ChrisG1454 4 months, 1 week ago
answer is A and D
upvoted 1 times
ChrisG1454 4 months, 1 week ago
https://www.examtopics.com/discussions/amazon/view/46534-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
rrharris 4 months, 1 week ago
A and D are the Answers
upvoted 1 times
Question #277 Topic 1
A company provides an online service for posting video content and transcoding it for use by any mobile platform. The application architecture
uses Amazon Elastic File System (Amazon EFS) Standard to collect and store the videos so that multiple Amazon EC2 Linux instances can access the video content for processing. As the popularity of the service has grown over time, the storage costs have become too expensive.
Which storage solution is MOST cost-effective?
A. Use AWS Storage Gateway for files to store and process the video content.
B. Use AWS Storage Gateway for volumes to store and process the video content.
C. Use Amazon EFS for storing the video content. Once processing is complete, transfer the files to Amazon Elastic Block Store (Amazon EBS).
D. Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.
Community vote distribution
D (71%) A (29%)
bdp123 Highly Voted 4 months, 1 week ago
Storage gateway is not used for storing content - only to transfer to the Cloud
upvoted 12 times
Brak Highly Voted 3 months, 3 weeks ago
It can't be D, since there are multiple servers accessing the video files which rules out EBS. File Gateway provides a shared filesystem to replace EFS, but uses S3 for storage to reduce costs.
upvoted 5 times
smartegnine Most Recent 1 week, 4 days ago
The result should be A.
Amazon Storage Gateway has 4 types: S3 File Gateway, FSx File Gateway, Tape Gateway, and Volume Gateway.
If no specific type is referenced, the file gateway should default to the S3 File Gateway, which sends files over to S3, the most cost-effective storage in AWS.
Why not D? The reason is the last sentence: there are multiple EC2 servers processing the video, and an EBS volume can only attach to 1 EC2 instance at a time, so with EBS you would need 1 EBS volume per EC2 instance. This rules out D.
upvoted 1 times
RainWhisper 4 days, 21 hours ago
AWS Storage Gateway = extend storage to on-prem
upvoted 1 times
MostafaWardany 3 weeks ago
D: MOST cost-effective of these options = S3
upvoted 1 times
omoakin 1 month ago
CCCCCCCCCCCCCCC
upvoted 1 times
kruasan 2 months ago
The most cost-effective storage solution in this scenario would be:
D. Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.
This option provides the lowest-cost storage by using:
Amazon S3 for large-scale, durable, and inexpensive storage of the video content. S3 storage costs are significantly lower than EFS.
Amazon EBS only temporarily during processing. By mounting an EBS volume only when a video needs to be processed, and unmounting it after, the time the content spends on the higher-cost EBS storage is minimized.
The EBS volume can be sized to match the workload needs for active processing, keeping costs lower. The volume does not need to store the entire video library long-term.
upvoted 1 times
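The cost argument above can be made concrete with back-of-envelope arithmetic. The per-GB prices below are approximate us-east-1 list prices and are assumptions for illustration only (check current AWS pricing); the library size is hypothetical.

```python
# Rough cost comparison behind option D, using approximate list prices
# (assumptions, not authoritative figures):
#   Amazon EFS Standard ≈ $0.30 per GB-month
#   Amazon S3 Standard  ≈ $0.023 per GB-month
library_gb = 50_000  # hypothetical 50 TB video library

efs_monthly = library_gb * 0.30
s3_monthly = library_gb * 0.023

print(f"EFS ≈ ${efs_monthly:,.0f}/month, S3 ≈ ${s3_monthly:,.0f}/month")

# At these prices S3 is roughly an order of magnitude cheaper for bulk
# storage, which is why the library lives in S3 and only files currently
# being processed touch EBS.
assert efs_monthly / s3_monthly > 10
```

The trade-off is access semantics, not just price: EBS attaches to one instance, so each worker only stages its own in-flight file there, while S3 holds the shared library.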
kraken21 2 months, 4 weeks ago
There is no on-prem/non-AWS infrastructure to create a gateway for. Also, EFS+EBS is more expensive than S3+EBS. So D is the best option.
upvoted 4 times
Option A, which uses AWS Storage Gateway for files to store and process the video content, would be the most cost-effective solution.
With this approach, you would use an AWS Storage Gateway file gateway to access the video content stored in Amazon S3. The file gateway presents a file interface to the EC2 instances, allowing them to access the video content as if it were stored on a local file system. The video processing tasks can be performed on the EC2 instances, and the processed files can be stored back in S3.
This approach is cost-effective because it leverages the lower cost of Amazon S3 for storage while still allowing for easy access to the video content from the EC2 instances using a file interface. Additionally, Storage Gateway provides caching capabilities that can further improve performance by reducing the need to access S3 directly.
upvoted 1 times
Selected Answer: A
Amazon S3 File gateway is using S3 behind the scene. https://docs.aws.amazon.com/filegateway/latest/files3/what-is-file-s3.html
upvoted 1 times
CapJackSparrow 3 months, 2 weeks ago
Amazon S3 File Gateway
Amazon S3 File Gateway presents a file interface that enables you to store files as objects in Amazon S3 using the industry-standard NFS and SMB file protocols, and access those files via NFS and SMB from your data center or Amazon EC2, or access those files as objects directly in Amazon S3. POSIX-style metadata, including ownership, permissions, and timestamps are durably stored in Amazon S3 in the user-metadata of the object associated with the file. Once objects are transferred to S3, they can be managed as native S3 objects and bucket policies such as lifecycle management and Cross-Region Replication (CRR), and can be applied directly to objects stored in your bucket. Amazon S3 File Gateway also publishes audit logs for SMB file share user operations to Amazon CloudWatch.
Customers can use Amazon S3 File Gateway to back up on-premises file data as objects in Amazon S3 (including Microsoft SQL Server and Oracle databases and logs), and for hybrid cloud workflows using data generated by on-premises applications for processing by AWS services such as machine learning or big data analytics.
upvoted 1 times
Using Amazon S3 for storing video content is the best way for cost-effectiveness, I think. But I am still confused about why the data is moved to EBS.
upvoted 2 times
A better solution would be to use a transcoding service like Amazon Elastic Transcoder to process the video content directly from Amazon S3. This would eliminate the need for storing the content on an EBS volume, reduce storage costs, and simplify the architecture by removing the need for managing EBS volumes.
upvoted 2 times
AlmeroSenior 4 months, 1 week ago
A looks right . File Gateway is S3 , but exposes it as NFS/SMB . So no need for costly retrieval like option D , or C consuming expensive EBS .
upvoted 2 times
Can someone please explain or provide information on why not C? If we go with option D, it states that we store the content in S3, which is indeed cheaper, but then we move the files to EBS for processing. How are multiple Linux instances going to process the same videos from EBS when they can't read them simultaneously?
Where for Option C, we indeed keep the EFS, then we process from there and move them to EBS for reading? seems more logical to me
upvoted 1 times
Option C still uses EFS for ingest and adds EBS on top, so it will not reduce cost.
upvoted 1 times
Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.
upvoted 2 times
rrharris 4 months, 1 week ago Most Cost Effective is S3 upvoted 4 times
Question #278 Topic 1
A company wants to create an application to store employee data in a hierarchical structured relationship. The company needs a minimum-latency response to high-traffic queries for the employee data and must protect any sensitive data. The company also needs to receive monthly email
messages if any financial information is present in the employee data.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Use Amazon Redshift to store the employee data in hierarchies. Unload the data to Amazon S3 every month.
B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.
C. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly events to AWS Lambda.
D. Use Amazon Athena to analyze the employee data in Amazon S3. Integrate Athena with Amazon QuickSight to publish analysis dashboards and share the dashboards with users.
E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon Simple Notification Service (Amazon SNS) subscription.
Community vote distribution
BE (100%)
Bhawesh Highly Voted 4 months, 1 week ago
Data in hierarchies : Amazon DynamoDB
B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.
Sensitive Info: Amazon Macie
E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon Simple Notification Service (Amazon SNS) subscription.
upvoted 9 times
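The Macie-to-SNS wiring in option E can be sketched as two EventBridge payloads: a rule matching Macie findings and a target pointing at an SNS topic with an email subscription. The rule/topic names and account number are hypothetical, and the detail-type string is my best recollection of what Macie emits, so verify it against the Macie EventBridge documentation.

```python
import json

# Sketch of option E: route Macie findings to SNS via EventBridge.
# Names and the ARN are hypothetical placeholders.
rule = {
    "Name": "macie-financial-findings",
    "EventPattern": json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],  # assumed detail-type; confirm in docs
    }),
}

# The target hands matching findings to an SNS topic; an email subscription
# on that topic is what delivers the monthly messages.
target = {
    "Rule": rule["Name"],
    "Targets": [{
        "Id": "notify-compliance",
        "Arn": "arn:aws:sns:us-east-1:123456789012:compliance-alerts",  # hypothetical
    }],
}

assert "aws.macie" in rule["EventPattern"]
```

These payloads correspond to EventBridge's PutRule and PutTargets calls; option C stops at Lambda and never actually sends the required email, which is why E is the better half of the pair.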
gold4otas 3 months ago
Can someone please provide explanation why options "B" & "C" are the correct options?
upvoted 1 times
smartegnine 1 week, 4 days ago
C is only half a statement: once the event is sent to Lambda, what happens next? It should send an email, but the option does not say that.
upvoted 1 times
cesargalindo123 Most Recent 1 week ago
AE
https://aws.amazon.com/es/blogs/big-data/query-hierarchical-data-models-within-amazon-redshift/
upvoted 1 times
kruasan 2 months ago
The combination of DynamoDB for fast data queries, S3 for durable storage and backups, Macie for sensitive data monitoring, and EventBridge + SNS for email notifications satisfies all needs: fast query response, sensitive data protection, and monthly alerts. The solutions architect should implement DynamoDB with export to S3, and configure Macie with integration to send SNS email notifications.
upvoted 1 times
kruasan 2 months ago
Generally, for building a hierarchical relationship model, a graph database such as Amazon Neptune is a better choice. In some cases, however, DynamoDB is a better choice for hierarchical data modeling because of its flexibility, security, performance, and scale.
https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-hierarchical-data-model/introduction.html
upvoted 2 times
darn 2 months ago
why Dynamo and not Redshift?
upvoted 1 times
kruasan 2 months ago
Hierarchical data - DynamoDB supports hierarchical (nested) data structures well in a NoSQL data model. Defining hierarchical employee data may be more complex in Redshift's columnar SQL data warehouse structure. DynamoDB is built around flexible data schemas that can represent complex relationships.
Data export - Both DynamoDB and Redshift allow exporting data to S3, so that requirement could be met with either service. However, overall DynamoDB is the better fit based on the points above regarding latency, scalability, and support for hierarchical data.
upvoted 3 times
kruasan 2 months ago
Low latency - DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with single-digit millisecond latency. Redshift is a data warehouse solution optimized for complex analytical queries, so query latency would typically be higher. Since the requirements specify minimum latency for high-traffic queries, DynamoDB is better suited.
Scalability - DynamoDB is highly scalable, able to handle very high read and write throughput with no downtime. Redshift also scales, but may experience some downtime during rescale operations. For a high-traffic application, DynamoDB's scalability and availability are better matched.
upvoted 2 times
PRASAD180 4 months ago
BE is correct, 100%
upvoted 1 times
KZM 4 months, 1 week ago
B and E
To send monthly email messages, an SNS service is required.
upvoted 2 times
skiwili 4 months, 1 week ago
B and E
upvoted 3 times
Question #279 Topic 1
A company has an application that is backed by an Amazon DynamoDB table. The company’s compliance requirements specify that database backups must be taken every month, must be available for 6 months, and must be retained for 7 years.
Which solution will meet these requirements?
A. Create an AWS Backup plan to back up the DynamoDB table on the first day of each month. Specify a lifecycle policy that transitions the backup to cold storage after 6 months. Set the retention period for each backup to 7 years.
B. Create a DynamoDB on-demand backup of the DynamoDB table on the first day of each month. Transition the backup to Amazon S3 Glacier Flexible Retrieval after 6 months. Create an S3 Lifecycle policy to delete backups that are older than 7 years.
C. Use the AWS SDK to develop a script that creates an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that runs the script on the first day of each month. Create a second script that will run on the second day of each month to transition DynamoDB backups that are older than 6 months to cold storage and to delete backups that are older than 7 years.
D. Use the AWS CLI to create an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that runs the command on the first day of each month with a cron expression. Specify in the command to transition the backups to cold storage after 6 months and to delete the backups after 7 years.
Community vote distribution
A (100%)
kruasan 2 months ago
This solution satisfies the requirements in the following ways:
AWS Backup will automatically take full backups of the DynamoDB table on the schedule defined in the backup plan (the first of each month).
The lifecycle policy can transition backups to cold storage after 6 months, meeting that requirement.
Setting a 7-year retention period in the backup plan will ensure each backup is retained for 7 years as required.
AWS Backup manages the backup jobs and lifecycle policies, requiring no custom scripting or management.
upvoted 2 times
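kruasan's points above map directly onto a backup-plan document. A minimal sketch, expressed as the `BackupPlan` parameter that boto3's `backup.create_backup_plan(BackupPlan=...)` accepts; the plan name, rule name, vault name, and schedule are hypothetical placeholders:

```python
# Hypothetical sketch of option A's backup plan (names and schedule assumed).
backup_plan = {
    "BackupPlanName": "dynamodb-monthly",           # hypothetical name
    "Rules": [
        {
            "RuleName": "monthly-on-the-first",
            "TargetBackupVaultName": "Default",
            # cron: 05:00 UTC on day 1 of every month
            "ScheduleExpression": "cron(0 5 1 * ? *)",
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 180,  # ~6 months, then cold storage
                "DeleteAfterDays": 7 * 365,         # retain each backup for 7 years
            },
        }
    ],
}
```

AWS Backup requires the delete date to be at least 90 days after the cold-storage transition, which the 7-year retention easily satisfies.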
TariqKipkemei 3 months ago
Answer is A
upvoted 1 times
mmustafa4455 3 months, 1 week ago
The correct Answer is A
https://aws.amazon.com/blogs/database/set-up-scheduled-backups-for-amazon-dynamodb-using-aws-backup/
upvoted 1 times
mmustafa4455 3 months, 1 week ago
It's B.
https://aws.amazon.com/blogs/database/set-up-scheduled-backups-for-amazon-dynamodb-using-aws-backup/
upvoted 1 times
skiwili 4 months, 1 week ago
A is the correct answer
upvoted 1 times
rrharris 4 months, 1 week ago
A is the Answer
AWS Backup can be used to create backup schedules and retention policies for DynamoDB tables
upvoted 2 times
kpato87 4 months, 1 week ago
Create an AWS Backup plan to back up the DynamoDB table on the first day of each month. Specify a lifecycle policy that transitions the backup to cold storage after 6 months. Set the retention period for each backup to 7 years.
upvoted 3 times
Question #280 Topic 1
A company is using Amazon CloudFront with its website. The company has enabled logging on the CloudFront distribution, and logs are saved in one of the company’s Amazon S3 buckets. The company needs to perform advanced analyses on the logs and build visualizations.
What should a solutions architect do to meet these requirements?
A. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
B. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
C. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
D. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
Community vote distribution
B (86%) 14%
rrharris Highly Voted 4 months, 1 week ago
Answer is B - QuickSight for creating data visualizations
https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
upvoted 5 times
ajay258 Most Recent 1 month, 1 week ago
Answer is B
upvoted 1 times
FFO 2 months, 2 weeks ago
Athena and QuickSight. Glue is for ETL transformations.
upvoted 1 times
TariqKipkemei 3 months ago
Answer is B
Analysis on S3 = Athena
Visualizations = QuickSight
upvoted 1 times
GalileoEC2 3 months ago
Why the Hell A?
upvoted 1 times
GalileoEC2 3 months, 1 week ago
Why A? As far as I know, Glue is not used for visualization.
upvoted 1 times
Bhrino 4 months, 1 week ago
B because athena can be used to analyse data in s3 buckets and AWS quicksight is literally used to create visual representation of data
upvoted 1 times
LuckyAro 4 months, 1 week ago
Using Athena to query the CloudFront logs in the S3 bucket and QuickSight to visualize the results is the best solution because it is cost-effective, scalable, and requires no infrastructure setup. It also provides a robust solution that enables the company to perform advanced analysis and build interactive visualizations without the need for a dedicated team of developers.
upvoted 1 times
obatunde 4 months, 1 week ago
Correct answer should be B.
upvoted 1 times
Namrash 4 months, 1 week ago
B is correct
upvoted 1 times
kpato87 4 months, 1 week ago
Amazon Athena can be used to analyze data in S3 buckets using standard SQL queries without requiring any data transformation. By using Athena, a solutions architect can easily and efficiently query the CloudFront logs stored in the S3 bucket. The results of the queries can be visualized using Amazon QuickSight, which provides powerful data visualization capabilities and easy-to-use dashboards. Together, Athena and QuickSight provide a cost-effective and scalable solution to analyze CloudFront logs and build visualizations.
upvoted 4 times
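A minimal sketch of the Athena half of option B: a standard SQL query over CloudFront access logs, held here as a Python string. The table name, S3 location, and exact column names are illustrative assumptions; real columns follow CloudFront's standard access log format.

```python
# Hypothetical Athena query over a CloudFront-logs external table.
query = """
SELECT "date", status, COUNT(*) AS requests
FROM cloudfront_logs            -- assumed external table over s3://example-bucket/cf-logs/
WHERE status >= 400             -- HTTP errors only
GROUP BY "date", status
ORDER BY requests DESC
LIMIT 20
"""
```

The query results can then be registered as a QuickSight dataset through its Athena connector to build the visualizations.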
bdp123 4 months, 1 week ago
https://aws.amazon.com/blogs/big-data/harmonize-query-and-visualize-data-from-various-providers-using-aws-glue-amazon-athena-and-amazon-quicksight/
https://docs.aws.amazon.com/comprehend/latest/dg/tutorial-reviews-visualize.html
upvoted 2 times
tellmenowwwww 4 months ago
Attached file related with B
upvoted 1 times
Question #281 Topic 1
A company runs a fleet of web servers using an Amazon RDS for PostgreSQL DB instance. After a routine compliance check, the company sets a standard that requires a recovery point objective (RPO) of less than 1 second for all its production databases.
Which solution meets these requirements?
A. Enable a Multi-AZ deployment for the DB instance.
B. Enable auto scaling for the DB instance in one Availability Zone.
C. Configure the DB instance in one Availability Zone, and create multiple read replicas in a separate Availability Zone.
D. Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC) tasks.
Community vote distribution
A (100%)
KZM Highly Voted 4 months, 1 week ago
A:
By using Multi-AZ deployment, the company can achieve an RPO of less than 1 second because the standby instance is always in sync with the primary instance, ensuring that data changes are continuously replicated.
upvoted 7 times
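Option A boils down to a single API call. A sketch of the parameters boto3's `rds.modify_db_instance()` takes to convert an existing instance to Multi-AZ; the instance identifier is a hypothetical placeholder:

```python
# Hypothetical parameters for rds.modify_db_instance(**modify_params).
modify_params = {
    "DBInstanceIdentifier": "prod-postgres",  # hypothetical identifier
    "MultiAZ": True,            # provision a synchronous standby in another AZ
    "ApplyImmediately": True,   # don't wait for the next maintenance window
}
```

Because the standby is kept in sync via synchronous replication, committed writes are not lost on failover, which is what achieves the sub-second RPO.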
rrharris Highly Voted 4 months, 1 week ago
Correct Answer is A
upvoted 7 times
FFO Most Recent 2 months, 2 weeks ago
Used for DR. Every single change is replicated to a standby in another AZ. If we lose the main AZ, the standby automatically fails over and becomes the new main DB (it keeps the same DNS name).
upvoted 2 times
TariqKipkemei 3 months ago
Answer is A
High availability = Multi AZ
upvoted 1 times
ManOnTheMoon 4 months, 1 week ago
Agree with A
upvoted 1 times
LuckyAro 4 months, 1 week ago
Multi-AZ uses synchronous replication with the master in "real time," and failover will be almost instant.
upvoted 2 times
Namrash 4 months, 1 week ago
A should be correct
upvoted 2 times
Joan111edu 4 months, 1 week ago
should be A
upvoted 2 times
Question #282 Topic 1
A company runs a web application that is deployed on Amazon EC2 instances in the private subnet of a VPC. An Application Load Balancer (ALB) that extends across the public subnets directs web traffic to the EC2 instances. The company wants to implement new security measures to
restrict inbound traffic from the ALB to the EC2 instances while preventing access from any other source inside or outside the private subnet of the EC2 instances.
Which solution will meet these requirements?
A. Configure a route in a route table to direct traffic from the internet to the private IP addresses of the EC2 instances.
B. Configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB.
C. Move the EC2 instances into the public subnet. Give the EC2 instances a set of Elastic IP addresses.
D. Configure the security group for the ALB to allow any TCP traffic on any port.
Community vote distribution
B (100%)
Abrar2022 3 weeks, 5 days ago
Read the discussion; that's the whole point when ExamTopics picks the wrong answer. Follow the most-voted answer, not the ExamTopics answer.
upvoted 1 times
antropaws 1 month ago
It's very confusing that the system marks C as correct.
upvoted 1 times
FFO 2 months, 2 weeks ago
This is B. The question already tells us they want ONLY traffic from the ALB.
upvoted 1 times
TariqKipkemei 3 months ago
Answer is B
upvoted 1 times
GalileoEC2 3 months, 1 week ago
Why C? Another crazy answer. If I am concerned about security, why would I want to expose my EC2 instances to the public internet? It makes no sense at all. Am I correct about this? I also go with B.
upvoted 2 times
LuckyAro 4 months, 1 week ago
B is the correct answer.
upvoted 2 times
kpato87 4 months, 1 week ago
configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB. This ensures that only the traffic originating from the ALB is allowed access to the EC2 instances in the private subnet, while denying any other traffic from other sources. The other options do not provide a suitable solution to meet the stated requirements.
upvoted 2 times
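The voted answer can be sketched as the ingress rule boto3's `ec2.authorize_security_group_ingress()` takes. The security group ID is a hypothetical placeholder; the key point is that the source is the ALB's security group rather than a CIDR range:

```python
ALB_SG = "sg-0alb0000000000000"        # hypothetical ALB security group ID

# Hypothetical IpPermissions entry for the EC2 instances' security group.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 80,
    "ToPort": 80,
    # Reference the ALB's security group as the source instead of a CIDR:
    # only traffic originating from the ALB can reach the instances.
    "UserIdGroupPairs": [{"GroupId": ALB_SG}],
}
```

Because no `IpRanges` (CIDR) entry is present, no other source inside or outside the private subnet is permitted.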
Bhawesh 4 months, 1 week ago
Configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB.
upvoted 3 times
Question #283 Topic 1
A research company runs experiments that are powered by a simulation application and a visualization application. The simulation application
runs on Linux and outputs intermediate data to an NFS share every 5 minutes. The visualization application is a Windows desktop application that displays the simulation output and requires an SMB file system.
The company maintains two synchronized file systems. This strategy is causing data duplication and inefficient resource usage. The company needs to migrate the applications to AWS without making code changes to either application.
Which solution will meet these requirements?
A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications.
B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.
C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.
D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon FSx for NetApp ONTAP for storage.
Community vote distribution
D (93%) 7%
LuckyAro Highly Voted 4 months, 1 week ago
Amazon FSx for NetApp ONTAP provides shared storage between Linux and Windows file systems.
upvoted 6 times
rrharris Highly Voted 4 months, 1 week ago
Answer is D
upvoted 6 times
Abrar2022 Most Recent 3 weeks, 5 days ago
For shared storage between Linux and windows you need to implement Amazon FSx for NetApp ONTAP
upvoted 1 times
kruasan 2 months ago
This solution satisfies the needs in the following ways:
Amazon EC2 provides a seamless migration path for the existing server-based applications without code changes. The simulation app can run on Linux EC2 instances and the visualization app on Windows EC2 instances.
Amazon FSx for NetApp ONTAP provides highly performant file storage that is accessible via both NFS and SMB. This allows the simulation app to write to NFS shares as currently designed, and the visualization app to access the same data via SMB.
FSx for NetApp ONTAP ensures the data is synchronized and up to date across the file systems. This addresses the data duplication issues of the current setup.
Resources can be scaled efficiently since EC2 and FSx provide scalable compute and storage on demand.
upvoted 3 times
kruasan 2 months ago
The other options would require more significant changes:
Migrating to Lambda would require re-architecting both applications and not meet the requirement to avoid code changes. S3 does not provide file system access.
While ECS could run the apps without code changes, Amazon FSx File Gateway is designed for on-premises access to in-cloud FSx for Windows File Server shares; it does not solve in-cloud sharing between NFS and SMB clients. Data exchange would still be an issue.
Using SQS for data exchange between EC2 instances would require code changes to implement a messaging system rather than a shared file system.
upvoted 1 times
Wael216 3 months, 4 weeks ago
Windows => FSx
Containers aren't mentioned => can't be ECS
upvoted 1 times
everfly 4 months, 1 week ago
Amazon FSx for NetApp ONTAP is a fully managed service that provides shared file storage built on NetApp's popular ONTAP file system. It supports the NFS, SMB, and iSCSI protocols and also allows multi-protocol access to the same data.
upvoted 1 times
Yechi 4 months, 1 week ago
Amazon FSx for NetApp ONTAP is a fully-managed shared storage service built on NetApp’s popular ONTAP file system. Amazon FSx for NetApp ONTAP provides the popular features, performance, and APIs of ONTAP file systems with the agility, scalability, and simplicity of a fully managed AWS service, making it easier for customers to migrate on-premises applications that rely on NAS appliances to AWS. FSx for ONTAP file systems are similar to on-premises NetApp clusters. Within each file system that you create, you also create one or more storage virtual machines (SVMs). These are isolated file servers each with their own endpoints for NFS, SMB, and management access, as well as authentication (for both administration and end-user data access). In turn, each SVM has one or more volumes which store your data. https://aws.amazon.com/de/blogs/storage/getting-started-cloud-file-storage-with-amazon-fsx-for-netapp-ontap-using-netapp-management-tools/
upvoted 3 times
zTopic 4 months, 1 week ago
B is correct I believe
upvoted 1 times
Question #284 Topic 1
As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.
Which solution meets these requirements?
A. Run a query with Amazon Athena to generate the report.
B. Create a report in Cost Explorer and download the report.
C. Access the bill details from the billing dashboard and download the bill.
D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
Community vote distribution
B (100%)
DagsH 3 months, 1 week ago
Cost Explorer looks at the usage pattern or history
upvoted 2 times
pcops 4 months, 1 week ago
Answer is B
upvoted 2 times
rrharris 4 months, 1 week ago
Answer is B
upvoted 2 times
Question #285 Topic 1
A company hosts its static website by using Amazon S3. The company wants to add a contact form to its webpage. The contact form will have dynamic server-side components for users to input their name, email address, phone number, and user message. The company anticipates that there will be fewer than 100 site visits each month.
Which solution will meet these requirements MOST cost-effectively?
A. Host a dynamic contact form page in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES) to connect to any third-party email provider.
B. Create an Amazon API Gateway endpoint with an AWS Lambda backend that makes a call to Amazon Simple Email Service (Amazon SES).
C. Convert the static webpage to dynamic by deploying Amazon Lightsail. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.
D. Create a t2.micro Amazon EC2 instance. Deploy a LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack to host the webpage. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.
Community vote distribution
B (87%) 13%
obatunde Highly Voted 4 months, 1 week ago
Correct answer is B. https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-amazon-ses/
upvoted 5 times
kruasan Most Recent 2 months ago
This solution is the most cost-efficient for the anticipated 100 monthly visits because:
API Gateway charges are based on API calls. With only 100 visits, charges would be minimal.
AWS Lambda provides compute time for the backend code in increments of 100ms, so charges would also be negligible for this workload.
Amazon SES is used only for sending emails from the submitted contact forms. SES has a generous free tier of 62,000 emails per month, so there would be no charges for sending the contact emails.
No EC2 instances or other infrastructure needs to be run and paid for.
upvoted 2 times
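Option B's Lambda backend can be sketched in a few lines. The email addresses and field names are hypothetical, and the SES client is injected so the sketch can be exercised without AWS credentials (a real Lambda would create it with boto3):

```python
import json

def handler(event, context=None, ses_client=None):
    """Hypothetical sketch: parse the API Gateway proxy event carrying the
    contact form and forward it as an email via Amazon SES."""
    form = json.loads(event["body"])
    body_text = (f"From: {form['name']} <{form['email']}>\n"
                 f"Phone: {form['phone']}\n\n"
                 f"{form['message']}")
    ses_client.send_email(
        Source="contact@example.com",                    # must be SES-verified
        Destination={"ToAddresses": ["team@example.com"]},
        Message={
            "Subject": {"Data": "Website contact form"},
            "Body": {"Text": {"Data": body_text}},
        },
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

At under 100 invocations a month, both the API Gateway and Lambda charges round to effectively zero, which is why this beats any always-on instance.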
datz 2 months, 2 weeks ago
B would be cheaper than option D,
Remember, only 100 site visits per month, so you are comparing an API Gateway used 100 times a month with a constantly running EC2 instance...
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
Both API Gateway and Lambda are serverless, so charges apply only to the 100 form submissions per month
upvoted 1 times
bdp123 4 months ago
After looking at cost of Workmail compared to SES - probably 'B' is better
upvoted 2 times
bdp123 4 months ago
Create a t2.micro Amazon EC2 instance. Deploy a LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack to host the webpage (free open source). Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail. This solution will provide the company with the necessary components to host the contact form page and integrate it with Amazon WorkMail at the lowest cost. Option A requires the use of Amazon ECS, which is more expensive than EC2; Option B requires the use of Amazon API Gateway, which is also more expensive than EC2; and Option C requires the use of Amazon Lightsail, which is more expensive than EC2.
https://aws.amazon.com/what-is/lamp-stack/
upvoted 1 times
SkyZeroZx 1 month, 4 weeks ago
3 million API Gateway requests == 3.50 USD (US East (Ohio)). Option B is cheaper. https://aws.amazon.com/es/api-gateway/pricing/
https://aws.amazon.com/es/lambda/pricing/
upvoted 1 times
Palanda 4 months, 1 week ago
It's B
upvoted 1 times
LuckyAro 4 months, 1 week ago
B allows the company to create an API endpoint using AWS Lambda, which is a cost-effective and scalable solution for a contact form with low traffic. The backend can make a call to Amazon SES to send email notifications, which simplifies the process and reduces complexity.
upvoted 1 times
cloudbusting 4 months, 1 week ago
it is B : https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-amazon-ses/
upvoted 3 times
bdp123 4 months, 1 week ago
Using AWS Lambda with Amazon API Gateway: https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
AWS Lambda FAQs: https://aws.amazon.com/lambda/faqs/
upvoted 1 times
Question #286 Topic 1
A company has a static website that is hosted on Amazon CloudFront in front of Amazon S3. The static website uses a database backend. The company notices that the website does not reflect updates that have been made in the website’s Git repository. The company checks the continuous integration and continuous delivery (CI/CD) pipeline between the Git repository and Amazon S3. The company verifies that the webhooks are configured properly and that the CI/CD pipeline is sending messages that indicate successful deployments.
A solutions architect needs to implement a solution that displays the updates on the website. Which solution will meet these requirements?
A. Add an Application Load Balancer.
B. Add Amazon ElastiCache for Redis or Memcached to the database layer of the web application.
C. Invalidate the CloudFront cache.
D. Use AWS Certificate Manager (ACM) to validate the website’s SSL certificate.
Community vote distribution
C (92%) 8%
fulingyu288 Highly Voted 4 months, 1 week ago
Invalidate the CloudFront cache: The solutions architect should invalidate the CloudFront cache to ensure that the latest version of the website is being served to users.
upvoted 6 times
kruasan Most Recent 2 months ago
Since the static website is hosted behind CloudFront, updates made to the S3 bucket will not be visible on the site until the CloudFront cache expires or is invalidated. By invalidating the CloudFront cache after deploying updates, the latest version in S3 will be pulled and the updates will then appear on the live site.
upvoted 1 times
RoroJ 1 month ago
Isn't that C?
upvoted 2 times
Namrash 4 months, 1 week ago
B should be the right one
upvoted 1 times
Neorem 4 months, 1 week ago
We need to create a CloudFront invalidation
upvoted 2 times
Bhawesh 4 months, 1 week ago
Invalidate the CloudFront cache.
Problem is the CF cache. After invalidating the CloudFront cache, CF will be forced to read the updated static page from S3, and the S3 changes will start being visible.
upvoted 3 times
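A sketch of option C, assuming boto3: `cloudfront.create_invalidation()` takes an `InvalidationBatch` like the one below. The distribution ID is a hypothetical placeholder; invalidating `/*` forces CloudFront to fetch fresh objects from the S3 origin on the next request.

```python
import time

DISTRIBUTION_ID = "EDFDVBD6EXAMPLE"    # hypothetical distribution ID

# Hypothetical InvalidationBatch for cloudfront.create_invalidation().
invalidation_batch = {
    "Paths": {"Quantity": 1, "Items": ["/*"]},   # invalidate every cached object
    # CallerReference must be unique per invalidation request
    "CallerReference": str(int(time.time())),
}
```

In a CI/CD pipeline, this call is typically the final deploy step, so every successful deployment is immediately visible.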
Question #287 Topic 1
A company wants to migrate a Windows-based application from on premises to the AWS Cloud. The application has three tiers: an application tier, a business tier, and a database tier with Microsoft SQL Server. The company wants to use specific features of SQL Server such as native backups and Data Quality Services. The company also needs to share files for processing between the tiers.
How should a solutions architect design the architecture to meet these requirements?
A. Host all three tiers on Amazon EC2 instances. Use Amazon FSx File Gateway for file sharing between the tiers.
B. Host all three tiers on Amazon EC2 instances. Use Amazon FSx for Windows File Server for file sharing between the tiers.
C. Host the application tier and the business tier on Amazon EC2 instances. Host the database tier on Amazon RDS. Use Amazon Elastic File System (Amazon EFS) for file sharing between the tiers.
D. Host the application tier and the business tier on Amazon EC2 instances. Host the database tier on Amazon RDS. Use a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume for file sharing between the tiers.
Community vote distribution
B (90%) 10%
KZM Highly Voted 4 months ago
It is B:
A: Incorrect> FSx File Gateway is designed for low-latency, efficient access to in-cloud FSx for Windows File Server file shares from your on-premises facility.
B: Correct> This solution will allow the company to host all three tiers on Amazon EC2 instances while using Amazon FSx for Windows File Server to provide Windows-based file sharing between the tiers. This will allow the company to use specific features of SQL Server, such as native backups and Data Quality Services, while sharing files for processing between the tiers.
C: Incorrect> Currently, Amazon EFS supports the NFSv4.1 protocol and does not natively support the SMB protocol, and can't be used in Windows instances yet.
D: Incorrect> Amazon EBS is a block-level storage solution that is typically used to store data at the operating system level, rather than for file sharing between servers.
upvoted 7 times
Abrar2022 Most Recent 3 weeks, 5 days ago
The question mentions Microsoft = Windows. EFS is Linux-only.
upvoted 1 times
kruasan 2 months ago
This design satisfies the needs in the following ways:
Running all tiers on EC2 allows using SQL Server on EC2 with its native features like backups and Data Quality Services. Data Quality Services is not supported on Amazon RDS for SQL Server.
Amazon FSx for Windows File Server provides fully managed Windows file storage with SMB access. This allows sharing files between the Windows EC2 instances for all three tiers.
FSx for Windows File Server has high performance, so it can handle file sharing needs between the tiers.
upvoted 1 times
kruasan 2 months ago
The other options would not meet requirements:
A. FSx File Gateway is designed for on-premises access to in-cloud file shares; it is unnecessary once all tiers run in AWS.
C. RDS for SQL Server does not support required features such as Data Quality Services, so the database tier needs to run on EC2.
D. EBS volumes can only be attached to a single EC2 instance. They cannot be shared between tiers for file exchanges.
upvoted 1 times
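A sketch of the storage piece of option B, as the parameters boto3's `fsx.create_file_system()` accepts for FSx for Windows File Server. The subnet and Active Directory IDs, capacity, and throughput are illustrative assumptions:

```python
# Hypothetical parameters for fsx.create_file_system(**fsx_params).
fsx_params = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 300,                     # GiB, illustrative
    "SubnetIds": ["subnet-0123456789abcdef0"],  # hypothetical subnet
    "WindowsConfiguration": {
        "ThroughputCapacity": 32,               # MB/s, illustrative
        # SMB access is authenticated against Active Directory
        "ActiveDirectoryId": "d-1234567890",    # hypothetical directory
    },
}
```

All three tiers mount the resulting SMB share, so the simulation output is visible to the Windows visualization tier without a second synchronized file system.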
ManOnTheMoon 4 months, 1 week ago
Why not C?
upvoted 1 times
KZM 4 months ago
Currently, Amazon EFS supports the NFSv4.1 protocol and does not natively support the SMB protocol, and can't be used in Windows instances yet.
upvoted 2 times
AlmeroSenior 4 months, 1 week ago
Yup, B. RDS will not work: native backup is only to S3, and Data Quality Services is not supported, so all EC2. https://aws.amazon.com/premiumsupport/knowledge-center/native-backup-rds-sql-server/ and https://www.sqlserver-dba.com/2021/07/aws-rds-sql-server-limitations.html
upvoted 2 times
LuckyAro 4 months, 1 week ago
After further research, I concur that the correct answer is B. Native Back up and Data Quality not supported on RDS for Ms SQL
upvoted 2 times
LuckyAro 4 months, 1 week ago
C.
Host the application tier and the business tier on Amazon EC2 instances. Host the database tier on Amazon RDS.
Use Amazon Elastic File System (Amazon EFS) for file sharing between the tiers.
This solution allows the company to use specific features of SQL Server such as native backups and Data Quality Services, by hosting the database tier on Amazon RDS. It also enables file sharing between the tiers using Amazon EFS, which is a fully managed, highly available, and scalable file system. Amazon EFS provides shared access to files across multiple instances, which is important for processing files between the tiers. Additionally, hosting the application and business tiers on Amazon EC2 instances provides the company with the flexibility to configure and manage the environment according to their requirements.
upvoted 1 times
rushi0611 1 month, 3 weeks ago
How are you going to connect EFS to Windows-based instances?
upvoted 1 times
Yechi 4 months, 1 week ago
Data Quality Services: If this feature is critical to your workload, consider choosing Amazon RDS Custom or Amazon EC2. https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/comparison.html
upvoted 3 times
Question #288 Topic 1
A company is migrating a Linux-based web server group to AWS. The web servers must access files in a shared file store for some content. The company must not make any changes to the application.
What should a solutions architect do to meet these requirements?
A. Create an Amazon S3 Standard bucket with access to the web servers.
B. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin.
C. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on all web servers.
D. Configure a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume to all web servers.
Community vote distribution
C (100%)
Bhawesh Highly Voted 4 months, 1 week ago
Since no code change is permitted, the choice below makes sense for the Linux servers' file sharing:
C. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on all web servers.
upvoted 10 times
kruasan 2 months ago
This solution satisfies the needs in the following ways:
EFS provides a fully managed elastic network file system that can be mounted on multiple EC2 instances concurrently.
The EFS file system appears as a standard file system mount on the Linux web servers, requiring no application changes. The servers can access shared files as if they were on local storage.
EFS is highly available, durable, and scalable, providing a robust shared storage solution.
upvoted 1 times
kruasan 2 months ago
The other options would require modifying the application or do not provide a standard file system:
A. S3 does not provide a standard file system mount or share. The application would need to be changed to access S3 storage.
B. CloudFront is a content delivery network and caching service. It does not provide a file system mount or share and would require application changes.
D. EBS volumes can only attach to a single EC2 instance. They cannot be mounted by multiple servers concurrently and do not provide a shared file system.
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
No application changes are allowed and EFS is compatible with Linux
upvoted 1 times
LuckyAro 4 months, 1 week ago
C is the answer:
Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on all web servers.
To meet the requirements of providing a shared file store for Linux-based web servers without making changes to the application, using an Amazon EFS file system is the best solution.
Amazon EFS is a managed NFS file system service that provides shared access to files across multiple Linux-based instances, which makes it suitable for this use case.
Amazon S3 is not ideal for this scenario since it is an object storage service and not a file system, and it requires additional tools or libraries to mount the S3 bucket as a file system.
Amazon CloudFront can be used to improve content delivery performance but is not necessary for this requirement.
Additionally, Amazon EBS volumes can only be mounted to one instance at a time, so it is not suitable for sharing files across multiple instances.
upvoted 2 times
Karlos99 3 months, 3 weeks ago
But what about aws ebs multi attach?
upvoted 2 times
elearningtakai 3 months ago
Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances. EBS General Purpose SSD (gp3) doesn't support Multi-Attach
upvoted 1 times
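What option C looks like on each web server can be sketched as a command string, assembled here so it can be checked without a live instance. The file-system ID, region, and mount point are hypothetical; the NFSv4.1 options follow the values AWS recommends for EFS:

```python
# Hypothetical EFS mount command for each Linux web server.
fs_id, region = "fs-0123456789abcdef0", "us-east-1"
# EFS DNS names follow the pattern <fs-id>.efs.<region>.amazonaws.com
mount_target = f"{fs_id}.efs.{region}.amazonaws.com:/"
# NFSv4.1 options recommended in the EFS mount documentation
options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
mount_cmd = f"sudo mount -t nfs4 -o {options} {mount_target} /var/www/shared"
```

Because the share appears as an ordinary file-system path, the application keeps reading and writing files exactly as before, with no code changes.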
Question #289 Topic 1
A company has an AWS Lambda function that needs read access to an Amazon S3 bucket that is located in the same AWS account. Which solution will meet these requirements in the MOST secure manner?
A. Apply an S3 bucket policy that grants read access to the S3 bucket.
B. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.
C. Embed an access key and a secret key in the Lambda function’s code to grant the required IAM permissions for read access to the S3 bucket.
D. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to all S3 buckets in the account.
Community vote distribution
B (100%)
kruasan 2 months ago
This solution satisfies the needs in the most secure manner:
An IAM role provides temporary credentials to the Lambda function to access AWS resources. The function does not have persistent credentials.
The IAM policy grants least privilege access by specifying read access only to the specific S3 bucket needed. Access is not granted to all S3 buckets.
If the Lambda function is compromised, the attacker would only gain access to the one specified S3 bucket. They would not receive broad access to resources.
upvoted 1 times
kruasan 2 months ago
The other options are less secure:
A bucket policy grants open access to a resource. It is a less granular way to provide access and grants more privilege than needed.
Embedding access keys in code is extremely insecure and against best practices. The keys provide full access and are at major risk of compromise if the code leaks.
Granting access to all S3 buckets provides far too much privilege if only one bucket needs access. It greatly expands the impact if compromised.
upvoted 1 times
Dr_Chomp 2 months, 2 weeks ago
You don't want to grant access to all S3 buckets (which is answer D) - only the one identified (so answer A)
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
B is only for one bucket and you want to use Role based security here.
upvoted 1 times
Ja13 4 months ago
C, it says MOST secure manner, so only to one bucket
upvoted 1 times
Joxtat 4 months, 1 week ago
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html
upvoted 1 times
kpato87 4 months, 1 week ago
This is the most secure and recommended way to provide an AWS Lambda function with access to an S3 bucket. It involves creating an IAM role that the Lambda function assumes, and attaching an IAM policy to the role that grants the necessary permissions to read from the S3 bucket.
upvoted 3 times
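The least-privilege setup described above can be sketched as follows. This is a minimal illustration; the bucket name, role name, and policy name are placeholders, not values from the question.

```python
import json

def build_s3_read_policy(bucket_name: str) -> dict:
    """Least-privilege IAM policy: read-only access to one specific bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadOnlyOneBucket",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",      # ListBucket acts on the bucket
                    f"arn:aws:s3:::{bucket_name}/*",    # GetObject acts on the objects
                ],
            }
        ],
    }

# Attaching it to the Lambda function's execution role (hypothetical names):
# import boto3
# boto3.client("iam").put_role_policy(
#     RoleName="my-lambda-exec-role",
#     PolicyName="s3-read-one-bucket",
#     PolicyDocument=json.dumps(build_s3_read_policy("my-bucket")),
# )
```

Because the `Resource` list names a single bucket, a compromised function cannot reach any other bucket in the account, which is the point raised in the comments above.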
Joan111edu 4 months, 1 week ago
B. Least of privilege
upvoted 2 times
Question #290 Topic 1
A company hosts a web application on multiple Amazon EC2 instances. The EC2 instances are in an Auto Scaling group that scales in response to user demand. The company wants to optimize cost savings without making a long-term commitment.
Which EC2 instance purchasing option should a solutions architect recommend to meet these requirements?
A. Dedicated Instances only
B. On-Demand Instances only
C. A mix of On-Demand Instances and Spot Instances
D. A mix of On-Demand Instances and Reserved Instances
Community vote distribution
C (90%) 10%
Abrar2022 3 weeks, 5 days ago
It's about COST, not operational efficiency for this question.
upvoted 1 times
kraken21 2 months, 4 weeks ago
Autoscaling with ALB / scale up on demand using on demand and spot instance combination makes sense. Reserved will not fit the no-long term commitment clause.
upvoted 1 times
WherecanIstart 3 months ago
Without a long-term commitment: Spot Instances
upvoted 1 times
cegama543 3 months, 2 weeks ago
If the company wants to optimize cost savings without making a long-term commitment, then using only On-Demand Instances may not be the most cost-effective option. Spot Instances can be significantly cheaper than On-Demand Instances, but they come with the risk of being interrupted if the Spot price increases above your bid price. If the company is willing to accept this risk, a mix of On-Demand Instances and Spot Instances may be the best option to optimize cost savings while maintaining the desired level of scalability.
However, if the company wants the most predictable pricing and does not want to risk instance interruption, then using only On-Demand Instances is a good choice. It ultimately depends on the company's priorities and risk tolerance.
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
It's about COST, not operational efficiency for this question.
upvoted 1 times
bdp123 4 months, 1 week ago
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups.html
upvoted 1 times
AlmeroSenior 4 months, 1 week ago
C - web apps are mostly stateless, and ASGs support an On-Demand and Spot mix. In fact, you can prioritize On-Demand before it uses Spot > https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-template-spot-instances.html
upvoted 1 times
designmood22 4 months, 1 week ago
Answer : C. A mix of On-Demand Instances and Spot Instances
upvoted 1 times
LuckyAro 4 months, 1 week ago
To optimize cost savings without making a long-term commitment, a mix of On-Demand Instances and Spot Instances would be the best EC2 instance purchasing option to recommend.
By combining On-Demand and Spot Instances, the company can take advantage of the cost savings offered by Spot Instances during periods of low demand while maintaining the reliability and stability of On-Demand Instances during periods of high demand. This provides a cost-effective solution that can scale with user demand without making a long-term commitment.
upvoted 1 times
NolaHOla 4 months, 1 week ago
In this scenario, a mix of On-Demand Instances and Spot Instances is the most cost-effective option, as it can provide significant cost savings while maintaining application availability. The Auto Scaling group can be configured to launch Spot Instances when the demand is high and On-Demand Instances when demand is low or when Spot Instances are not available. This approach provides a balance between cost savings and reliability.
upvoted 3 times
minglu 4 months, 1 week ago
In my opinion, it is C, on demand instances and spot instances can be in a single auto scaling group.
upvoted 3 times
Question #291 Topic 1
A media company uses Amazon CloudFront for its publicly available streaming video content. The company wants to secure the video content
that is hosted in Amazon S3 by controlling who has access. Some of the company’s users are using a custom HTTP client that does not support cookies. Some of the company’s users are unable to change the hardcoded URLs that they are using for access.
Which services or methods will meet these requirements with the LEAST impact to the users? (Choose two.)
A. Signed cookies
B. Signed URLs
C. AWS AppSync
D. JSON Web Token (JWT)
E. AWS Secrets Manager
Community vote distribution
AB (85%) Other
leoattf Highly Voted 4 months ago
I thought that option A was totally wrong, because the question mentions "HTTP client does not support cookies". However it is right, along with option B. Check the link below, first paragraph.
https://aws.amazon.com/blogs/media/secure-content-using-cloudfront-functions/
upvoted 12 times
Steve_4542636 3 months, 4 weeks ago
Thanks for this! What a tricky question. If the client doesn't support cookies, THEN they use the signed S3 Urls.
upvoted 4 times
johnmcclane78 Highly Voted 3 months, 3 weeks ago
Signed URLs - This method allows the media company to control who can access the video content by creating a time-limited URL with a cryptographic signature. This URL can be distributed to the users who are unable to change the hardcoded URLs they are using for access, and they can access the content without needing to support cookies.
D. JSON Web Token (JWT) - This method allows the media company to control who can access the video content by creating a secure token that contains user authentication and authorization information. This token can be distributed to the users who are using a custom HTTP client that does not support cookies. The users can include this token in their requests to access the content without needing to support cookies.
Therefore, options B and D are the correct answers.
Option A (Signed cookies) would not work for users who are using a custom HTTP client that does not support cookies. Option C (AWS AppSync) is not relevant to the requirement of securing video content. Option E (AWS Secrets Manager) is a service used for storing and retrieving secrets, which is not relevant to the requirement of securing video content.
upvoted 9 times
MrAWSAssociate Most Recent 1 week ago
These are the right answers!
upvoted 1 times
DrWatson 3 weeks, 5 days ago
"Some of the company’s users" does not support cookies, then they'll use Signed URLs.
"Some of the company’s users" are unable to change the hardcoded URLs, then they'll use Signed cookies.
upvoted 1 times
kruasan 2 months ago
Signed cookies would allow the media company to authorize access to related content (like HLS video segments) with a single signature, minimizing implementation overhead. This works for users that can support cookies.
Signed URLs would allow the media company to sign each URL individually to control access, supporting users that cannot use cookies. By embedding the signature in the URL, existing hardcoded URLs would not need to change.
upvoted 1 times
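To make the signed-URL mechanism concrete: a CloudFront signed URL embeds a policy (or its expiry) plus an RSA signature of that policy, encoded with CloudFront's URL-safe base64 variant. A sketch of the two documented building blocks, the canned policy and the encoding (the example URL is hypothetical; actual signing requires the CloudFront key pair's private key, e.g. via `botocore.signers.CloudFrontSigner`):

```python
import base64
import json

def cloudfront_safe_b64(data: bytes) -> str:
    """CloudFront's URL-safe base64: '+' -> '-', '=' -> '_', '/' -> '~'."""
    s = base64.b64encode(data).decode("ascii")
    return s.replace("+", "-").replace("=", "_").replace("/", "~")

def canned_policy(url: str, expires_epoch: int) -> str:
    """The canned policy document that gets signed for a basic signed URL:
    access to one resource until a given expiry time."""
    return json.dumps(
        {
            "Statement": [
                {
                    "Resource": url,
                    "Condition": {
                        "DateLessThan": {"AWS:EpochTime": expires_epoch}
                    },
                }
            ]
        },
        separators=(",", ":"),  # CloudFront expects no whitespace in the policy
    )

# The final URL then looks like (values illustrative):
#   https://d111111abcdef8.cloudfront.net/video.mp4
#       ?Expires=1700000000&Signature=<signed policy>&Key-Pair-Id=<key id>
```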
kruasan 2 months ago
AWS AppSync - This is for building data-driven apps with real-time and offline capabilities. It does not directly help with securing streaming content.
JSON Web Token (JWT) - Although JWTs can be used for authorization, they would require the client to get a token and validate/check access on the server for each request. This does not work for hardcoded URLs and minimizes impact.
AWS Secrets Manager - This service is for managing secrets, not for controlling access to resources. It would not meet the requirements.
upvoted 1 times
A. Signed cookies: CloudFront signed cookies allow you to control who can access your content when you don't want to change your current URLs. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html
B. Signed URLs: This method allows the media company to control who can access the video content by creating a time-limited URL with a cryptographic signature.
upvoted 1 times
ahilan26 2 months, 2 weeks ago
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html
upvoted 2 times
CapJackSparrow 3 months, 2 weeks ago
Some of the company’s users are using a custom HTTP client that does not support cookies. **Signed URLs
Some of the company’s users are unable to change the hardcoded URLs that they are using for access. **Signed cookies
upvoted 5 times
TungPham 3 months, 4 weeks ago
https://aws.amazon.com/vi/blogs/media/awse-protecting-your-media-assets-with-token-authentication/ JSON Web Token (JWT) needs to be used with Lambda@Edge
upvoted 3 times
HaineHess 3 months, 4 weeks ago
b d seems good
upvoted 1 times
It says some use a custom HTTP client that does not support cookies - those will use signed URLs, which have precedence over cookies https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html
upvoted 1 times
AB is wrong. Your point that cookies are disabled eliminates the use of signed cookies, and the hardcoding eliminates the use of signed URLs, so A and B are totally eliminated. Read the article further, not just the first few lines; then read up on signed URLs.
upvoted 1 times
B, D
A presigned URL passes authentication via GET parameters, i.e. the query string. Therefore B can be the answer.
Authentication with a JWT token can use an HTTP header, which does not use cookies. Therefore D can be the answer. (Apologies for any awkward phrasing; I am not a native English speaker.)
upvoted 1 times
ChrisG1454 4 months, 1 week ago
Using Appsync is possible
https://stackoverflow.com/questions/48495338/how-to-upload-file-to-aws-s3-using-aws-appsync
upvoted 1 times
B. Signed URLs: Signed URLs provide access to specific objects in Amazon S3 and can be generated with an expiration time, which means that the URL will only be valid for a specific period. This method does not require the use of cookies or changes to the hardcoded URLs used by some of the users.
D. JSON Web Token (JWT): JWT is a method for securely transmitting information between parties as a JSON object. It can be used to authenticate users and control access to resources, including streaming video content hosted in Amazon S3. This method does not require the use of cookies, and it can be used with custom HTTP clients that support header-based authentication.
Therefore, the media company can use Signed URLs and JWT to control access to their streaming video content hosted in Amazon S3, without impacting the users who are unable to change the hardcoded URLs they are using or those using a custom HTTP client that does not support cookies.
upvoted 1 times
TungPham 3 months, 4 weeks ago
https://aws.amazon.com/vi/blogs/media/awse-protecting-your-media-assets-with-token-authentication/ JSON Web Token (JWT) needs to be used with Lambda@Edge
upvoted 1 times
TungPham 3 months, 4 weeks ago
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html
upvoted 1 times
NolaHOla 4 months, 1 week ago
I would go A and B based on the question's description
upvoted 1 times
everfly 4 months, 1 week ago
Signed URLs are URLs that grant temporary access to an S3 object. They include a signature that verifies the authenticity of the request, as well as an expiration date that limits the time during which the URL is valid. This solution will work for users who are using custom HTTP clients that do not support cookies.
Signed cookies are similar to signed URLs, but they use cookies to grant temporary access to S3 objects. This solution will work for users who are unable to change the hardcoded URLs that they are using for access.
upvoted 3 times
Neha999 4 months, 1 week ago
The question says "custom HTTP client that does not support cookies". Then how can A be the answer ??
upvoted 2 times
Question #292 Topic 1
A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data.
Which solutions will meet these requirements? (Choose two.)
A. Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
B. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use AWS Glue to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
C. Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EMR to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
D. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use Amazon Kinesis Data Analytics to transform the data and to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
E. Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
Community vote distribution
AB (75%) AE (25%)
MrCloudy 2 months, 1 week ago
To transform real-time streaming data from multiple sources, write it to Amazon S3, and query the transformed data using SQL, the company can use the following solutions: Amazon Kinesis Data Streams, Amazon Kinesis Data Analytics, and Amazon Kinesis Data Firehose. The transformed data can be queried using Amazon Athena. Therefore, options A and E are the correct answers.
Option A is correct because it uses Amazon Kinesis Data Streams to stream data from multiple sources, Amazon Kinesis Data Analytics to transform the data, and Amazon Kinesis Data Firehose to write the data to Amazon S3. Amazon Athena can be used to query the transformed data in Amazon S3.
Option E is also correct because it uses Amazon Kinesis Data Streams to stream data from multiple sources, AWS Glue to transform the data, and Amazon Kinesis Data Firehose to write the data to Amazon S3. Amazon Athena can be used to query the transformed data in Amazon S3.
upvoted 3 times
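One common way to transform records inside a Kinesis Data Firehose delivery stream (as in option A's pipeline) is a Firehose transformation Lambda. The handler below follows the documented record format (`recordId`, base64 `data`, `result`); the actual transformation applied here is a trivial placeholder:

```python
import base64
import json

def handler(event, context):
    """Firehose data-transformation Lambda sketch: decode each record,
    transform it, and return it base64-encoded with result 'Ok' so that
    Firehose delivers the transformed record to Amazon S3."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["processed"] = True  # placeholder transformation
        output.append(
            {
                "recordId": record["recordId"],  # must echo the input recordId
                "result": "Ok",                  # or "Dropped" / "ProcessingFailed"
                "data": base64.b64encode(
                    json.dumps(payload).encode("utf-8")
                ).decode("utf-8"),
            }
        )
    return {"records": output}
```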
Paras043 2 months, 2 weeks ago
But how can you transform data using Kinesis Data Analytics?
upvoted 1 times
luisgu 1 month, 3 weeks ago
See https://aws.amazon.com/kinesis/data-analytics/faqs/?nc=sn&loc=6
upvoted 1 times
kraken21 2 months, 4 weeks ago
DMS can move data from databases to streaming services but cannot natively handle streaming data, hence A and B make sense. Also, AWS Glue ETL can handle MSK streaming: https://docs.aws.amazon.com/glue/latest/dg/add-job-streaming.html
upvoted 1 times
elearningtakai 3 months ago
The solutions that meet the requirements of streaming real-time data, transforming the data before writing to S3, and querying the transformed data using SQL are A and B.
Option C: This option is not ideal for streaming real-time data as AWS DMS is not optimized for real-time data ingestion.
Options D & E: These options are not recommended, as the Amazon RDS query editor is not designed for querying data in S3, and it is not efficient for running complex queries.
upvoted 2 times
gold4otas 3 months ago
The correct answers are options A & B
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
OK, for B I did some research, https://docs.aws.amazon.com/glue/latest/dg/add-job-streaming.html
"You can create streaming extract, transform, and load (ETL) jobs that run continuously, consume data from streaming sources like Amazon Kinesis Data Streams, Apache Kafka, and Amazon Managed Streaming for Apache Kafka (Amazon MSK). The jobs cleanse and transform the data, and then load the results into Amazon S3 data lakes or JDBC data stores."
upvoted 4 times
TungPham 3 months, 4 weeks ago
Can the Amazon RDS query editor query the transformed data from Amazon S3? I don't think so; please link docs for that.
upvoted 1 times
ManOnTheMoon 4 months, 1 week ago
Why not A & D?
upvoted 1 times
TungPham 3 months, 4 weeks ago
Can the Amazon RDS query editor query the transformed data from Amazon S3? I don't think so; please link docs for that.
upvoted 1 times
LuckyAro 4 months, 1 week ago
A and B
upvoted 1 times
designmood22 4 months, 1 week ago
Answer is : A & B
upvoted 1 times
rrharris 4 months, 1 week ago
Answer is A and B
upvoted 2 times
NolaHOla 4 months, 1 week ago
A and B
upvoted 2 times
Question #293 Topic 1
A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on AWS is automatically and securely transferred.
Which solution meets these requirements?
A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball S3 endpoint to provide local access to the data.
B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-premises systems with local access to the data.
C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software appliance on premises and configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data.
D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.
Community vote distribution
D (100%)
Steve_4542636 Highly Voted 3 months, 4 weeks ago
The question states, "wants to maintain local access to all the data" This is storage gateway. Cached gateway stores only the frequently accessed data locally which is not what the problem statement asks for.
upvoted 8 times
kruasan Most Recent 2 months ago
The company wants to maintain local access to all the data. Only stored volumes keep the complete dataset on-premises, providing low-latency access. Cached volumes only cache a subset locally.
The company wants the data backed up on AWS. With stored volumes, periodic backups (snapshots) of the on-premises data are sent to S3, providing durable and scalable backup storage.
The company wants the data transfer to AWS to be automatic and secure. Storage Gateway provides an encrypted connection between the on-premises gateway and AWS storage. Backups to S3 are sent asynchronously and automatically based on the backup schedule configured.
upvoted 2 times
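A stored volume can be created on the gateway with the Storage Gateway API. A minimal sketch; every identifier below is a hypothetical placeholder, and `PreserveExistingData=True` is what keeps the existing local data in place while the volume is backed up to AWS:

```python
# Parameters for storagegateway.create_stored_iscsi_volume (values hypothetical)
stored_volume_params = {
    "GatewayARN": "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    "DiskId": "pci-0000:03:00.0-scsi-0:0:0:0",  # local disk backing the volume
    "PreserveExistingData": True,  # keep existing on-premises data on the disk
    "TargetName": "backup-volume",  # iSCSI target name suffix
    "NetworkInterfaceId": "10.0.0.10",  # gateway interface serving iSCSI
}

# boto3.client("storagegateway").create_stored_iscsi_volume(**stored_volume_params)
```

On-premises systems then mount the resulting iSCSI target for local access, while snapshots flow to S3 in the background.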
ChrisG1454 4 months, 1 week ago
Ans = D
https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html
upvoted 3 times
Neha999 4 months, 1 week ago
D
upvoted 2 times
bdp123 4 months, 1 week ago
https://aws.amazon.com/storagegateway/faqs/#:~:text=In%20the%20cached%20mode%2C%20your,asynchronously%20backed%20up%20to%20AWS.
In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency access.
In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up to AWS.
upvoted 2 times
Question #294 Topic 1
An application that is hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Traffic must not traverse the internet. How should a solutions architect configure access to meet these requirements?
A. Create a private hosted zone by using Amazon Route 53.
B. Set up a gateway VPC endpoint for Amazon S3 in the VPC.
C. Configure the EC2 instances to use a NAT gateway to access the S3 bucket.
D. Establish an AWS Site-to-Site VPN connection between the VPC and the S3 bucket.
Community vote distribution
B (100%)
Steve_4542636 3 months, 4 weeks ago
S3 and DynamoDB are the only services with Gateway endpoint options
upvoted 2 times
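A gateway endpoint is created against the S3 service name for the region and associated with route tables, which then get a prefix-list route to S3. A sketch with hypothetical VPC and route-table IDs:

```python
def gateway_endpoint_params(vpc_id: str, route_table_id: str,
                            region: str = "us-east-1") -> dict:
    """Parameters for a gateway VPC endpoint so instances reach S3 privately,
    without traffic traversing the internet."""
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        # Associating a route table adds a route to the S3 prefix list
        "RouteTableIds": [route_table_id],
    }

# boto3.client("ec2").create_vpc_endpoint(
#     **gateway_endpoint_params("vpc-0abc123", "rtb-0def456"))
```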
ManOnTheMoon 4 months, 1 week ago
Agree with B
upvoted 1 times
jennyka76 4 months, 1 week ago
ANSWER - B
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 1 times
Question #295 Topic 1
An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the other two applications process the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application requests.
B. Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the requesting application.
C. Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom dataset. Point each application to its respective S3 bucket.
D. Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom dataset. Point each application to its respective DynamoDB table.
Community vote distribution
B (93%) 3%
fruto123 Highly Voted 4 months ago
B is the right answer and the proof is in this link.
https://aws.amazon.com/blogs/aws/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-s3/
upvoted 9 times
Steve_4542636 Highly Voted 3 months, 4 weeks ago
Actually this is what Macie is best used for.
upvoted 6 times
Abrar2022 Most Recent 3 weeks, 5 days ago
Store the data in an Amazon S3 bucket and using S3 Object Lambda to process and transform the data before returning it to the requesting application. This approach allows the PII to be removed in real-time and without the need to create separate datasets or tables for each application.
upvoted 1 times
antropaws 1 month ago
@fruto123 and everyone that upvoted:
Is it plausible that S3 Object Lambda can process terabytes of data in 60 seconds? The same link you shared states that the maximum duration for a Lambda function used by S3 Object Lambda is 60 seconds.
Answer is A.
upvoted 1 times
antropaws 1 month ago
Chat GPT:
Isn't just 60 seconds the maximum duration for a Lambda function used by S3 Object Lambda? How can it process terabytes of data in 60 seconds?
You are correct that the maximum duration for a Lambda function used by S3 Object Lambda is 60 seconds. Given the time constraint, it is not feasible to process terabytes of data within a single Lambda function execution.
S3 Object Lambda is designed for lightweight and real-time transformations rather than extensive processing of large datasets.
To handle terabytes of data, you would typically need to implement a distributed processing solution using services like Amazon EMR, AWS Glue, or AWS Batch. These services are specifically designed to handle big data workloads and provide scalability and distributed processing capabilities.
So, while S3 Object Lambda can be useful for lightweight processing tasks, it is not the appropriate tool for processing terabytes of data within the execution time limits of a Lambda function.
upvoted 1 times
kruasan 2 months ago
Storing the raw data in S3 provides a durable, scalable data lake. S3 requires little ongoing management overhead.
S3 Object Lambda can be used to filter and process the data on retrieval transparently. This minimizes operational overhead by avoiding the need to preprocess and store multiple transformed copies of the data.
Only one copy of the data needs to be stored and maintained in S3. S3 Object Lambda will transform the data on read based on the requesting application.
No additional applications or proxies need to be developed and managed to handle the data transformation. S3 Object Lambda provides this functionality.
upvoted 2 times
kruasan 2 months ago
Option A requires developing and managing a proxy app layer to handle data transformation, adding overhead.
Options C and D require preprocessing and storing multiple copies of the transformed data, adding storage and management overhead.
Option B, using S3 Object Lambda, minimizes operational overhead by handling data transformation on read transparently using native S3 functionality. Only one raw data copy is stored in S3, with no additional applications required.
upvoted 1 times
pagom 4 months ago
https://aws.amazon.com/ko/blogs/korea/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-s3/
upvoted 4 times
LuckyAro 4 months, 1 week ago
B is the correct answer.
Amazon S3 Object Lambda allows you to add custom code to S3 GET requests, which means that you can modify the data before it is returned to the requesting application. In this case, you can use S3 Object Lambda to remove the PII before the data is returned to the two applications that do not need to process PII. This approach has the least operational overhead because it does not require creating separate datasets or proxy application layers, and it allows you to maintain a single copy of the data in an S3 bucket.
upvoted 4 times
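An S3 Object Lambda function follows a fixed event shape: it fetches the original object via the presigned `inputS3Url` and returns transformed bytes with `WriteGetObjectResponse`. A sketch; the regex-based PII patterns are illustrative placeholders only (a real deployment would use a vetted PII detector such as Amazon Comprehend):

```python
import re

# Hypothetical PII patterns for illustration
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text: str) -> str:
    """Strip PII before data reaches the two non-PII applications."""
    return SSN.sub("[REDACTED]", EMAIL.sub("[REDACTED]", text))

def handler(event, context):
    """S3 Object Lambda handler sketch: fetch the original object through the
    presigned inputS3Url, redact it, and return the transformed bytes."""
    import urllib.request
    import boto3

    ctx = event["getObjectContext"]
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")
    boto3.client("s3").write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redact_pii(original).encode("utf-8"),
    )
    return {"status_code": 200}
```

The PII-processing application reads the bucket directly; the other two read through the Object Lambda access point, so only one copy of the data is stored.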
NolaHOla 4 months, 1 week ago
To meet the requirement of removing the PII before processing by two of the applications, it would be most efficient to use option B, which involves storing the data in an Amazon S3 bucket and using S3 Object Lambda to process and transform the data before returning it to the requesting application. This approach allows the PII to be removed in real-time and without the need to create separate datasets or tables for each application. S3 Object Lambda can be configured to automatically remove PII from the data before it is sent to the non-PII processing applications. This solution provides a cost-effective and scalable way to meet the requirement with the least operational overhead.
upvoted 2 times
skiwili 4 months, 1 week ago
Looks like C is the correct answer
upvoted 1 times
Question #296 Topic 1
A development team has launched a new application that is hosted on Amazon EC2 instances inside a development VPC. A solutions architect
needs to create a new VPC in the same account. The new VPC will be peered with the development VPC. The VPC CIDR block for the development VPC is 192.168.0.0/24. The solutions architect needs to create a CIDR block for the new VPC. The CIDR block must be valid for a VPC peering connection to the development VPC.
What is the SMALLEST CIDR block that meets these requirements?
A. 10.0.1.0/32
B. 192.168.0.0/24
C. 192.168.1.0/32
D. 10.0.1.0/24
Community vote distribution
D (100%)
BrainOBrain Highly Voted 4 months, 1 week ago
10.0.1.0/32 and 192.168.1.0/32 are too small for a VPC; a /32 network is only one host. 192.168.0.0/24 overlaps with the existing VPC.
upvoted 8 times
Abrar2022 Most Recent 3 weeks, 5 days ago
Definitely D. The only valid VPC CIDR block that does not overlap with the development VPC CIDR block among the options. The other 2 CIDR block options are too small.
upvoted 1 times
kruasan 2 months ago
Option A (10.0.1.0/32) is invalid - a /32 CIDR prefix is a host route, not a VPC range.
Option B (192.168.0.0/24) overlaps the development VPC and so cannot be used.
Option C (192.168.1.0/32) is invalid - a /32 CIDR prefix is a host route, not a VPC range.
Option D (10.0.1.0/24) satisfies the non-overlapping CIDR requirement but is a larger block than needed. Since only two VPCs need to be peered, a /24 block provides more addresses than necessary.
upvoted 3 times
channn 2 months, 4 weeks ago
D is the only correct answer
upvoted 1 times
r04dB10ck 3 months, 1 week ago
only one valid with no overlap
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
A process-of-elimination solution here. The CIDR prefix length is the number of bits that are locked, so a /32 like 10.0.1.0/32 means no range at all.
upvoted 2 times
LuckyAro 4 months, 1 week ago
Answer is D, 10.0.1.0/24.
upvoted 1 times
skiwili 4 months, 1 week ago
Yes D is the answer
upvoted 1 times
obatunde 4 months, 1 week ago
Definitely D. It is the only valid VPC CIDR block that does not overlap with the development VPC CIDR block among the options.
upvoted 1 times
bdp123 4 months, 1 week ago
The allowed block size is between a /28 netmask and /16 netmask.
The CIDR block must not overlap with any existing CIDR block that's associated with the VPC. https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html
upvoted 4 times
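The two constraints from the comments above (block size between /16 and /28, no overlap with the existing VPC) can be checked directly with the standard library:

```python
import ipaddress

def valid_peer_cidr(candidate: str, existing: str = "192.168.0.0/24") -> bool:
    """A peer VPC CIDR must be a legal VPC block (/16 to /28) and must not
    overlap the existing VPC's CIDR block."""
    net = ipaddress.ip_network(candidate)
    if not 16 <= net.prefixlen <= 28:
        return False  # e.g. a /32 is a single host, not a valid VPC block
    return not net.overlaps(ipaddress.ip_network(existing))

# valid_peer_cidr("10.0.1.0/32")    -> False (too small)
# valid_peer_cidr("192.168.0.0/24") -> False (overlaps the development VPC)
# valid_peer_cidr("192.168.1.0/32") -> False (too small)
# valid_peer_cidr("10.0.1.0/24")    -> True  (option D)
```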
Question #297 Topic 1
A company deploys an application on five Amazon EC2 instances. An Application Load Balancer (ALB) distributes traffic to the instances by using a target group. The average CPU usage on each of the instances is below 10% most of the time, with occasional surges to 65%.
A solutions architect needs to implement a solution to automate the scalability of the application. The solution must optimize the cost of the architecture and must ensure that the application has enough CPU resources when surges occur.
Which solution will meet these requirements?
A. Create an Amazon CloudWatch alarm that enters the ALARM state when the CPUUtilization metric is less than 20%. Create an AWS Lambda function that the CloudWatch alarm invokes to terminate one of the EC2 instances in the ALB target group.
B. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set a target tracking scaling policy that is based on the ASGAverageCPUUtilization metric. Set the minimum instances to 2, the desired capacity to 3, the maximum instances to 6, and the target value to 50%. Add the EC2 instances to the Auto Scaling group.
C. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set the minimum instances to 2, the desired capacity to 3, and the maximum instances to 6. Add the EC2 instances to the Auto Scaling group.
D. Create two Amazon CloudWatch alarms. Configure the first CloudWatch alarm to enter the ALARM state when the average CPUUtilization metric is below 20%. Configure the second CloudWatch alarm to enter the ALARM state when the average CPUUtilization metric is above 50%. Configure the alarms to publish to an Amazon Simple Notification Service (Amazon SNS) topic to send an email message. After receiving the message, log in to decrease or increase the number of EC2 instances that are running.
Community vote distribution
B (93%) 7%
bdp123 Highly Voted 4 months, 1 week ago
Just create an auto scaling policy
upvoted 8 times
RoroJ Most Recent 1 month ago
An Auto Scaling group must have an AMI (via a launch template or launch configuration) for it.
upvoted 1 times
th3k33n 1 month, 1 week ago
How can we set the max to 6 when the company is using 5 EC2 instances?
upvoted 1 times
examtopictempacc 1 month, 1 week ago
In the scenario you provided, you're setting up an Auto Scaling group to manage the instances for you, and the settings (min 2, desired 3, max 6) are for the Auto Scaling group, not for your existing instances. When you integrate the instances into the Auto Scaling group, you are effectively moving from a fixed instance count to a dynamic one that can range from 2 to 6 based on the demand.
The existing 5 instances can be included in the Auto Scaling group, but the group can reduce the number of instances if the load is low (to the minimum specified, which is 2 in this case) and can also add more instances (up to a maximum of 6) if the load increases.
upvoted 1 times
kruasan 2 months ago
Reasons:
An Auto Scaling group will automatically scale the EC2 instances to match changes in demand. This optimizes cost by only running as many instances as needed.
A target tracking scaling policy monitors the ASGAverageCPUUtilization metric and scales to keep the average CPU around the 50% target value. This ensures there are enough resources during CPU surges.
The ALB and target group are reused, so the application architecture does not change. The Auto Scaling group is associated to the existing load balancer setup.
A minimum of 2 and maximum of 6 instances provides the ability to scale between 2 and 6 instances as needed based on demand.
Costs are optimized by starting with only 3 instances (the desired capacity) and scaling up as needed. When CPU usage drops, instances are terminated to match the desired capacity.
upvoted 2 times
kruasan 2 months ago
Option A - terminates instances reactively based on low CPU and may not provide enough capacity during surges. Does not optimize cost. Option C - lacks a scaling policy so will not automatically adjust capacity based on changes in demand. Does not ensure enough resources during surges.
Option D - requires manual intervention to scale capacity. Does not optimize cost or provide an automated solution.
upvoted 1 times
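The target tracking setup described in option B can be sketched with boto3-shaped request dicts. This is only a sketch: the group name, target group ARN, and AMI are placeholders, and no API call is made here; with credentials you would pass these to `boto3.client("autoscaling").put_scaling_policy(**policy_request)` and the group-creation call.

```python
# Sketch of option B's target tracking policy (all names/ARNs are placeholders).
policy_request = {
    "AutoScalingGroupName": "web-asg",           # hypothetical group name
    "PolicyName": "cpu-target-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                     # keep average CPU near 50%
    },
}

# The group itself reuses the existing ALB target group, with min 2 /
# desired 3 / max 6 as the question specifies:
group_request = {
    "AutoScalingGroupName": "web-asg",
    "MinSize": 2,
    "DesiredCapacity": 3,
    "MaxSize": 6,
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web-tg/abc123",  # placeholder
    ],
}
```

With target tracking, Auto Scaling creates and manages the CloudWatch alarms itself, which is why option B needs no hand-built alarms.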
darn 2 months ago
As you dig deeper into the questions, they get more and more bogus, with fewer and fewer votes.
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
KZM 4 months ago
Based on the information given, the best solution is option"B".
Autoscaling group with target tracking scaling policy with min 2 instances, desired capacity to 3, and the maximum instances to 6.
upvoted 1 times
Shrestwt 2 months, 1 week ago
But the company is using only 5 EC2 instances, so how can we set the maximum to 6?
upvoted 2 times
LuckyAro 4 months, 1 week ago
B is the correct solution because it allows for automatic scaling based on the average CPU utilization of the EC2 instances in the target group. With the use of a target tracking scaling policy based on the ASGAverageCPUUtilization metric, the EC2 Auto Scaling group can ensure that the target value of 50% is maintained while scaling the number of instances in the group up or down as needed. This will help ensure that the application has enough CPU resources during surges without overprovisioning, thus optimizing the cost of the architecture.
upvoted 1 times
Babba 4 months, 1 week ago
Question #298 Topic 1
A company is running a critical business application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run in an Auto Scaling group and access an Amazon RDS DB instance.
The design did not pass an operational review because the EC2 instances and the DB instance are all located in a single Availability Zone. A solutions architect must update the design to use a second Availability Zone.
Which solution will make the application highly available?
A. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
B. Provision two subnets that extend across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
C. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
D. Provision a subnet that extends across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
Community vote distribution
C (100%)
bdp123 Highly Voted 4 months, 1 week ago
A subnet must reside within a single Availability Zone. https://aws.amazon.com/vpc/faqs/#:~:text=Can%20a%20subnet%20span%20Availability,within%20a%20single%20Availability%20Zone.
upvoted 9 times
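The point above (a subnet cannot span AZs) shows up directly in the EC2 API: `create_subnet` takes exactly one `AvailabilityZone`, so a two-AZ design means two subnets. A minimal sketch with placeholder IDs, CIDRs, and AZ names; with credentials you would call `boto3.client("ec2").create_subnet(**req)` once per entry.

```python
# Each subnet maps to exactly one Availability Zone, so a two-AZ design
# needs two subnets. VPC ID, CIDRs, and AZ names are illustrative.
vpc_id = "vpc-0123456789abcdef0"  # hypothetical

subnet_requests = [
    {"VpcId": vpc_id, "CidrBlock": "10.0.1.0/24", "AvailabilityZone": "us-east-1a"},
    {"VpcId": vpc_id, "CidrBlock": "10.0.2.0/24", "AvailabilityZone": "us-east-1b"},
]

# There is no way to pass two AZs to a single subnet request, which is why
# options B and D in the question are invalid.
zones = {req["AvailabilityZone"] for req in subnet_requests}
```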
MrAWSAssociate Most Recent 1 week ago
D is completely wrong, because each subnet must reside entirely within one Availability Zone and cannot span zones. By launching AWS resources in separate Availability Zones, you can protect your applications from the failure of a single Availability Zone.
upvoted 1 times
Anmol_1010 2 weeks, 1 day ago
The key word here was "extend".
upvoted 1 times
GalileoEC2 3 months, 1 week ago
This discards B and D: Subnet basics. Each subnet must reside entirely within one Availability Zone and cannot span zones. By launching AWS resources in separate Availability Zones, you can protect your applications from the failure of a single Availability Zone
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
a subnet is per AZ. a scaling group can span multiple AZs. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-zone.html
upvoted 1 times
KZM 4 months ago
I think D.
Spanning a single subnet across both Availability Zones would let instances access the DB instance in either zone without going over the public internet.
upvoted 2 times
KZM 4 months ago
Can it span like that?
upvoted 1 times
leoattf 4 months ago
Nope. The answer is indeed C.
You cannot span like that. Check the link below:
"Each subnet must reside entirely within one Availability Zone and cannot span zones." https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html
upvoted 3 times
KZM 4 months ago
Thanks, Leoattf for the link you shared.
upvoted 2 times
KZM 4 months ago
Sorry, I think C is correct.
upvoted 1 times
Babba 4 months, 1 week ago
it's C
upvoted 1 times
Question #299 Topic 1
A research laboratory needs to process approximately 8 TB of data. The laboratory requires sub-millisecond latencies and a minimum throughput of 6 GBps for the storage subsystem. Hundreds of Amazon EC2 instances that run Amazon Linux will distribute and process the data.
Which solution will meet the performance requirements?
A. Create an Amazon FSx for NetApp ONTAP file system. Set each volume's tiering policy to ALL. Import the raw data into the file system. Mount the file system on the EC2 instances.
B. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
C. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent HDD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
D. Create an Amazon FSx for NetApp ONTAP file system. Set each volume’s tiering policy to NONE. Import the raw data into the file system. Mount the file system on the EC2 instances.
Community vote distribution
B (100%)
Bhawesh Highly Voted 4 months, 1 week ago
Keyword here is a minimum throughput of 6 GBps. Only the FSx for Lustre with SSD option gives the sub-milli response and throughput of 6 GBps or more.
B. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
References:
https://aws.amazon.com/fsx/when-to-choose-fsx/
upvoted 9 times
bdp123 Highly Voted 4 months, 1 week ago
Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
Amazon FSx for Lustre uses SSD storage for sub-millisecond latencies and up to 6 GBps throughput, and can import data from and export data to Amazon S3. Additionally, selecting persistent SSD storage ensures that the data is stored on disk and not lost if the file system is stopped.
upvoted 6 times
kruasan Most Recent 2 months ago
Amazon FSx for Lustre with SSD storage can provide up to 260 GB/s of aggregate throughput and sub-millisecond latencies needed for this workload.
Persistent SSD storage ensures data durability in the file system. Data is also exported to S3 for backup storage.
The file system will import the initial 8 TB of raw data from S3, providing a fast storage tier for processing while retaining the data in S3.
The file system is mounted to the EC2 compute instances to distribute processing.
FSx for Lustre is optimized for high-performance computing workloads running on Linux, matching the EC2 environment.
upvoted 1 times
kruasan 2 months ago
Option A - FSx for NetApp ONTAP with ALL tiering policy would not provide fast enough storage tier for sub-millisecond latency. HDD tiers have higher latency.
Option C - FSx for Lustre with HDD storage would not provide the throughput, IOPS or low latency needed.
Option D - FSx for NetApp ONTAP with NONE tiering policy would require much more expensive SSD storage to meet requirements, increasing cost.
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
I vote B
upvoted 1 times
AlmeroSenior 4 months ago
FSx for Lustre is 1,000 MB/s per TiB provisioned, and we have 8 TB, which gives us 8 GB/s. FSx for NetApp ONTAP appears to have a hard limit of 4 GB/s.
https://aws.amazon.com/fsx/lustre/faqs/?nc=sn&loc=5 https://aws.amazon.com/fsx/netapp-ontap/faqs/
upvoted 3 times
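The throughput claim can be checked with quick arithmetic. The per-unit tiers below are the FSx for Lustre persistent SSD (PERSISTENT_2) throughput options as I recall them, so treat them as illustrative rather than authoritative; the lab's 8 TB is approximated as 8 TiB.

```python
# Back-of-envelope check: which per-TiB throughput tier clears 6 GB/s at 8 TiB?
capacity_tib = 8
per_tib_tiers_mbps = [125, 250, 500, 1000]  # assumed persistent SSD tiers (MB/s per TiB)

required_gbps = 6
aggregate_gbps = {t: capacity_tib * t / 1000 for t in per_tib_tiers_mbps}

# Only the top tier reaches 6 GB/s at this capacity (8 * 1000 MB/s = 8 GB/s):
meets = [t for t, g in aggregate_gbps.items() if g >= required_gbps]
```

The same math explains why the HDD option (C) is out: HDD per-TiB throughput is far lower, so 8 TB of HDD storage cannot reach 6 GB/s.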
LuckyAro 4 months, 1 week ago
B is the best choice as it utilizes Amazon S3 for data storage, which is cost-effective and durable, and Amazon FSx for Lustre for high-performance file storage, which provides the required sub-millisecond latencies and minimum throughput of 6 GBps. Additionally, the option to import and export data to and from Amazon S3 makes it easier to manage and move data between the two services.
B is the best option as it meets the performance requirements for sub-millisecond latencies and a minimum throughput of 6 GBps.
upvoted 1 times
everfly 4 months, 1 week ago
Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system. It can deliver sub-millisecond latencies and hundreds of gigabytes per second of throughput.
upvoted 3 times
Question #300 Topic 1
A company needs to migrate a legacy application from an on-premises data center to the AWS Cloud because of hardware capacity constraints. The application runs 24 hours a day, 7 days a week. The application’s database storage continues to grow over time.
What should a solutions architect do to meet these requirements MOST cost-effectively?
A. Migrate the application layer to Amazon EC2 Spot Instances. Migrate the data storage layer to Amazon S3.
B. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon RDS On-Demand Instances.
C. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.
D. Migrate the application layer to Amazon EC2 On-Demand Instances. Migrate the data storage layer to Amazon RDS Reserved Instances.
Community vote distribution
C (80%) B (20%)
NolaHOla Highly Voted 4 months, 1 week ago
Option B based on the fact that the DB storage will continue to grow, so on-demand will be a more suitable solution
upvoted 7 times
NolaHOla 4 months, 1 week ago
Since the application's database storage is continuously growing over time, it may be difficult to estimate the appropriate size of the Aurora cluster in advance, which is required when reserving Aurora.
In this case, it may be more cost-effective to use Amazon RDS On-Demand Instances for the data storage layer. With RDS On-Demand Instances, you pay only for the capacity you use and you can easily scale up or down the storage as needed.
upvoted 4 times
hristni0 4 weeks ago
Answer is C. From Aurora Reserved Instances documentation:
If you have a DB instance, and you need to scale it to larger capacity, your reserved DB instance is automatically applied to your scaled DB instance. That is, your reserved DB instances are automatically applied across all DB instance class sizes. Size-flexible reserved DB instances are available for DB instances with the same AWS Region and database engine.
upvoted 1 times
Joxtat 4 months ago
The Answer is C. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraMySQL.html
upvoted 1 times
LuckyAro Highly Voted 4 months, 1 week ago
Amazon EC2 Reserved Instances allow for significant cost savings compared to On-Demand instances for long-running, steady-state workloads like this one. Reserved Instances provide a capacity reservation, so the instances are guaranteed to be available for the duration of the reservation period.
Amazon Aurora is a highly scalable, cloud-native relational database service that is designed to be compatible with MySQL and PostgreSQL. It can automatically scale up to meet growing storage requirements, so it can accommodate the application's database storage needs over time. By using Reserved Instances for Aurora, the cost savings will be significant over the long term.
upvoted 6 times
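The cost argument for reserving capacity on a 24/7 workload is simple arithmetic. The hourly rates below are made-up placeholders, not real AWS prices; they only illustrate why a steady always-on workload favors reserved pricing.

```python
# Illustrative 24/7 cost comparison (rates are hypothetical placeholders).
hours_per_year = 24 * 365
on_demand_rate = 0.20            # $/hour, hypothetical
reserved_effective_rate = 0.12   # $/hour, hypothetical ~40% discount

on_demand_cost = hours_per_year * on_demand_rate
reserved_cost = hours_per_year * reserved_effective_rate
savings = on_demand_cost - reserved_cost   # grows linearly with uptime
```

For a workload that runs around the clock, the reserved discount applies to every hour, which is why options A (Spot, interruptible) and D (On-Demand compute) are less cost-effective here.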
TariqKipkemei 2 months, 1 week ago
Answer is C
upvoted 1 times
QuangPham810 2 months, 1 week ago
Answer is C. Refer https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithReservedDBInstances.html => Size-flexible reserved DB instances
upvoted 1 times
Abhineet9148232 3 months, 3 weeks ago
C: With Aurora Serverless v2, each writer and reader has its own current capacity value, measured in ACUs. Aurora Serverless v2 scales a writer or reader up to a higher capacity when its current capacity is too low to handle the load. It scales the writer or reader down to a lower capacity when its current capacity is higher than needed.
This is sufficient to accommodate the growing data changes.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-works.scaling
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
Typically, Amazon RDS costs less than Aurora. But here, the Aurora option is reserved.
upvoted 1 times
ACasper 3 months, 4 weeks ago
Answer C https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithReservedDBInstances.html Discounts for reserved DB instances are tied to instance type and AWS Region.
upvoted 1 times
AlmeroSenior 4 months ago
Both RDS and RDS Aurora support storage auto scaling.
Aurora is more expensive than base RDS, but between B and C, the Aurora option is a reserved instance and the base RDS option is on demand. Also, it states the DB storage will grow, so there is no concern about a bigger DB instance (server), only the actual storage.
upvoted 1 times
Joxtat 4 months ago
Samuel03 4 months ago
I also think it is B. Otherwise there is no point in mentioning the growing storage requirements.
upvoted 2 times
Americo32 4 months ago
Option B, based on the fact that the database storage will continue to grow, so on-demand will be a more suitable solution.
upvoted 1 times
Americo32 4 months ago
Changing to option C. Important purchase notes:
Reserved instance pricing covers only the instance costs. Storage and I/O are still billed separately.
upvoted 1 times
ManOnTheMoon 4 months, 1 week ago
Why not B?
upvoted 3 times
skiwili 4 months, 1 week ago
Ccccccc
upvoted 2 times
Question #301 Topic 1
A university research laboratory needs to migrate 30 TB of data from an on-premises Windows file server to Amazon FSx for Windows File Server. The laboratory has a 1 Gbps network link that many other departments in the university share.
The laboratory wants to implement a data migration service that will maximize the performance of the data transfer. However, the laboratory
needs to be able to control the amount of bandwidth that the service uses to minimize the impact on other departments. The data migration must take place within the next 5 days.
Which AWS solution will meet these requirements?
A. AWS Snowcone
B. Amazon FSx File Gateway
C. AWS DataSync
D. AWS Transfer Family
Community vote distribution
C (100%)
Michal_L_95 Highly Voted 3 months, 2 weeks ago
Having read a bit, I assume that B (FSx File Gateway) requires a little more configuration than C (DataSync). From Stephane Maarek's course explanation of DataSync:
An online data transfer service that simplifies, automates, and accelerates copying large amounts of data between on-premises storage systems and AWS Storage services, as well as between AWS Storage services.
You can use AWS DataSync to migrate data located on-premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP.
upvoted 5 times
jayce5 Most Recent 2 weeks, 1 day ago
"Amazon FSx File Gateway" is for storing data, not for migrating. So the answer should be C.
upvoted 1 times
kruasan 2 months ago
AWS DataSync is a data transfer service that can copy large amounts of data between on-premises storage and Amazon FSx for Windows File Server at high speeds. It allows you to control the amount of bandwidth used during data transfer.
DataSync uses agents at the source and destination to automatically copy files and file metadata over the network. This optimizes the data transfer and minimizes the impact on your network bandwidth.
DataSync allows you to schedule data transfers and configure transfer rates to suit your needs. You can transfer 30 TB within 5 days while controlling bandwidth usage.
DataSync can resume interrupted transfers and validate data to ensure integrity. It provides detailed monitoring and reporting on the progress and performance of data transfers.
upvoted 2 times
kruasan 2 months ago
Option A - AWS Snowcone is more suitable for physically transporting data when network bandwidth is limited. It would not complete the transfer within 5 days.
Option B - Amazon FSx File Gateway only provides access to files stored in Amazon FSx and does not perform the actual data migration from on-premises to FSx.
Option D - AWS Transfer Family is for transferring files over FTP, FTPS and SFTP. It may require scripting to transfer 30 TB and monitor progress, and lacks bandwidth controls.
upvoted 1 times
shanwford 2 months, 3 weeks ago
Snowcone is too small and the delivery time is too long. With DataSync you can set bandwidth limits, so this is a fine solution.
upvoted 3 times
MaxMa 2 months, 4 weeks ago
Why not B?
upvoted 1 times
AlessandraSAA 3 months, 3 weeks ago
A is not possible because Snowcone is just 8 TB and it takes 4-6 business days to deliver.
B - why can't it be this? https://aws.amazon.com/storagegateway/file/fsx/
C - I don't really get this.
D cannot be because it is not compatible - https://aws.amazon.com/aws-transfer-family/
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
Bhawesh 4 months ago
C - DataSync is correct.
A. Snowcone is incorrect. The question says the data migration must take place within the next 5 days. AWS says: if you order, you will receive the Snowcone device in approximately 4-6 days.
upvoted 2 times
LuckyAro 4 months, 1 week ago
DataSync can be used to migrate data between on-premises Windows file servers and Amazon FSx for Windows File Server with its compatibility for Windows file systems.
The laboratory needs to migrate a large amount of data (30 TB) within a relatively short timeframe (5 days) and limit the impact on other departments' network traffic. Therefore, AWS DataSync can meet these requirements by providing fast and efficient data transfer with network throttling capability to control bandwidth usage.
upvoted 3 times
cloudbusting 4 months, 1 week ago
https://docs.aws.amazon.com/datasync/latest/userguide/configure-bandwidth.html
upvoted 2 times
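The 5-day constraint can be sanity-checked with simple arithmetic. This sketch assumes the lab caps DataSync at a fraction of the shared 1 Gbps link, uses decimal TB, and ignores protocol overhead, so real throughput would be somewhat lower.

```python
# Rough feasibility math: can 30 TB move in 5 days under a bandwidth cap?
TB = 10**12
data_bytes = 30 * TB
link_bps = 1_000_000_000       # the shared 1 Gbps link
seconds_per_day = 86_400

def days_at_fraction(fraction):
    """Days to transfer the dataset using `fraction` of the link."""
    bytes_per_sec = link_bps * fraction / 8   # bits -> bytes
    return data_bytes / bytes_per_sec / seconds_per_day

full_link_days = days_at_fraction(1.0)   # ~2.8 days at the full 1 Gbps
half_link_days = days_at_fraction(0.5)   # ~5.6 days at 500 Mbps -- too slow
```

So the DataSync bandwidth limit has to stay above roughly 560 Mbps (30 TB x 8 bits over 5 days is about 556 Mbps) for the migration to finish on time while still leaving some headroom for the other departments.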
bdp123 4 months, 1 week ago
Question #302 Topic 1
A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket. However, the videos are large in their raw format.
Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the performance and scalability of the app while minimizing operational overhead.
Which combination of solutions will meet these requirements? (Choose two.)
A. Deploy Amazon CloudFront for content delivery and caching.
B. Use AWS DataSync to replicate the video files across AWS Regions in other S3 buckets.
C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
D. Deploy an Auto Scaling group of Amazon EC2 instances in Local Zones for content delivery and caching.
E. Deploy an Auto Scaling group of Amazon EC2 instances to convert the video files to more appropriate formats.
Community vote distribution
C (50%) A (50%)
Bhawesh Highly Voted 4 months, 1 week ago
For Minimum operational overhead, the 2 options A,C should be correct.
A. Deploy Amazon CloudFront for content delivery and caching.
C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
upvoted 10 times
enc_0343 Most Recent 1 day, 7 hours ago
AC is the correct answer
upvoted 1 times
antropaws 1 month ago
AC, the only possible answers.
upvoted 1 times
Eden 1 month, 3 weeks ago
It says choose two, so I chose A and C.
upvoted 1 times
WherecanIstart 3 months, 2 weeks ago
A & C are the right answers.
upvoted 2 times
kampatra 3 months, 2 weeks ago
Steve_4542636 3 months, 4 weeks ago
A and C. Transcoder does exactly what this needs.
upvoted 2 times
Steve_4542636 3 months, 4 weeks ago
A and C. CloudFront hs caching for A
upvoted 1 times
wawaw3213 4 months ago
a and c
upvoted 2 times
bdp123 4 months ago
Both A and C - I was not able to choose both https://aws.amazon.com/elastictranscoder/
upvoted 2 times
Bhrino 4 months, 1 week ago
A and C, because CloudFront would help the performance for content such as this, and Elastic Transcoder makes converting video for different devices almost seamless.
upvoted 1 times
LuckyAro 4 months, 1 week ago
A & C.
A: Deploy Amazon CloudFront for content delivery and caching: Amazon CloudFront is a content delivery network (CDN) that can help improve the performance and scalability of the app by caching content at edge locations, reducing latency, and improving the delivery of video clips to users.
CloudFront can also provide features such as DDoS protection, SSL/TLS encryption, and content compression to optimize the delivery of video clips.
C: Use Amazon Elastic Transcoder to convert the video files to more appropriate formats: Amazon Elastic Transcoder is a service that can help optimize the video format for mobile devices, reducing the size of the video files, and improving the playback performance. Elastic Transcoder can also convert videos into multiple formats to support different devices and platforms.
upvoted 2 times
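The A + C combination can be sketched as a single Elastic Transcoder job whose output CloudFront then serves from S3. The pipeline ID, object keys, and preset ID below are placeholders (real system preset IDs are listed in the console); with credentials you would pass this to `boto3.client("elastictranscoder").create_job(**job_request)`.

```python
# Sketch of option C: transcode a raw upload into a mobile-friendly rendition.
job_request = {
    "PipelineId": "1111111111111-abcde1",       # hypothetical pipeline
    "Input": {"Key": "raw/clip-0001.mov"},      # the raw upload in S3
    "Outputs": [
        # PresetId is a placeholder; pick a real HLS/720p system preset.
        {"Key": "hls/clip-0001.m3u8", "PresetId": "hls-720p-preset-id"},
    ],
}

# Option A: a CloudFront distribution in front of the output bucket then
# caches these renditions at edge locations for smooth mobile playback.
output_keys = [o["Key"] for o in job_request["Outputs"]]
```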
Babba 4 months, 1 week ago
jahmad0730 4 months, 1 week ago
Question #303 Topic 1
A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate launch type for ECS tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its launch. However, the company wants to reduce costs when utilization decreases.
What should a solutions architect recommend?
A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns.
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
Community vote distribution
D (100%)
rrharris Highly Voted 4 months, 1 week ago
Answer is D - auto scaling with target tracking.
upvoted 7 times
TariqKipkemei Most Recent 1 month, 3 weeks ago
Answer is D - Application Auto Scaling is a web service for developers and system administrators who need a solution for automatically scaling their scalable resources for individual AWS services beyond Amazon EC2.
upvoted 2 times
boxu03 3 months, 2 weeks ago
Joxtat 4 months ago
https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
upvoted 3 times
jahmad0730 4 months, 1 week ago
Neha999 4 months, 1 week ago
D : auto-scaling with target tracking
upvoted 3 times
Question #304 Topic 1
A company recently created a disaster recovery site in a different AWS Region. The company needs to transfer large amounts of data back and forth between NFS file systems in the two Regions on a periodic basis.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync.
B. Use AWS Snowball devices.
C. Set up an SFTP server on Amazon EC2.
D. Use AWS Database Migration Service (AWS DMS).
Community vote distribution
A (100%)
LuckyAro Highly Voted 4 months, 1 week ago
AWS DataSync is a fully managed data transfer service that simplifies moving large amounts of data between on-premises storage systems and AWS services. It can also transfer data between different AWS services, including different AWS Regions. DataSync provides a simple, scalable, and automated solution to transfer data, and it minimizes the operational overhead because it is fully managed by AWS.
upvoted 8 times
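For this cross-Region NFS sync, a DataSync task ties the two NFS locations together, and the task options include a bytes-per-second throttle. The location ARNs are placeholders; with credentials you would pass this to `boto3.client("datasync").create_task(**task)`.

```python
# Sketch of a periodic cross-Region DataSync task (ARNs are placeholders).
task = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111111111111:location/loc-src",
    "DestinationLocationArn": "arn:aws:datasync:us-west-2:111111111111:location/loc-dst",
    "Options": {
        "BytesPerSecond": 62_500_000,            # ~500 Mbps cap, illustrative; -1 = unlimited
        "VerifyMode": "POINT_IN_TIME_CONSISTENT", # validate data integrity after transfer
    },
}
```

A task schedule can then run the sync on the required periodic basis with no servers to manage, which is the "least operational overhead" part of the answer.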
kruasan Most Recent 2 months ago
AWS DataSync is a data transfer service optimized for moving large amounts of data between NFS file systems. It can automatically copy files and metadata between your NFS file systems in different AWS Regions.
DataSync requires minimal setup and management. You deploy a source and destination agent, provide the source and destination locations, and DataSync handles the actual data transfer efficiently in the background.
DataSync can schedule and monitor data transfers to keep source and destination in sync with minimal overhead. It resumes interrupted transfers and validates data integrity.
DataSync optimizes data transfer performance across AWS's network infrastructure. It can achieve high throughput with minimal impact to your operations.
upvoted 1 times
kruasan 2 months ago
Option B - AWS Snowball requires physical devices to transfer data. This incurs overhead to transport devices and manually load/unload data. It is not an online data transfer solution.
Option C - Setting up and managing an SFTP server would require provisioning EC2 instances, handling security groups, and writing scripts to automate the data transfer - all of which demand more overhead than DataSync.
Option D - AWS Database Migration Service is designed for migrating databases, not general file system data. It would require converting your NFS data into a database format, incurring additional overhead.
upvoted 1 times
ashu089 3 months ago
A only
upvoted 1 times
skiwili 4 months, 1 week ago
Aaaaaa
upvoted 1 times
NolaHOla 4 months, 1 week ago
A should be correct
upvoted 1 times
Question #305 Topic 1
A company is designing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed.
Which AWS solution meets these requirements?
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the application server.
Community vote distribution
C (100%)
Neha999 Highly Voted 4 months, 1 week ago
C: Amazon FSx for Windows File Server file system
upvoted 5 times
kruasan Most Recent 2 months ago
Amazon FSx for Windows File Server provides a fully managed native Windows file system that can be accessed using the industry-standard SMB protocol. This allows Windows clients like the gaming application to directly access file data.
FSx for Windows File Server handles time-consuming file system administration tasks like provisioning, setup, maintenance, file share management, backups, security, and software patching - reducing operational overhead.
FSx for Windows File Server supports high file system throughput, IOPS, and consistent low latencies required for performance-sensitive workloads. This makes it suitable for a gaming application.
The file system can be directly attached to EC2 instances, providing a performant shared storage solution for the gaming servers.
upvoted 1 times
kruasan 2 months ago
Option A - DataSync is for data transfer, not providing a shared file system. It cannot be mounted or directly accessed.
Option B - A self-managed EC2 file share would require manually installing, configuring and maintaining a Windows file system and share. This demands significant overhead to operate.
Option D - Amazon S3 is object storage, not a native file system. The data in S3 would need to be converted/formatted to provide file share access, adding complexity. S3 cannot be directly mounted or provide the performance of FSx.
upvoted 1 times
elearningtakai 3 months ago
Amazon FSx for Windows File Server
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
I vote C since FSx supports SMB
upvoted 1 times
LuckyAro 4 months, 1 week ago
AWS FSx for Windows File Server is a fully managed native Microsoft Windows file system that is accessible through the SMB protocol. It provides features such as file system backups, integrated with Amazon S3, and Active Directory integration for user authentication and access control. This solution allows for the use of SMB clients to access the data and is fully managed, eliminating the need for the company to manage the underlying infrastructure.
upvoted 2 times
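Option C can be sketched as a single FSx for Windows File Server creation request that SMB clients then mount. Subnet IDs, directory ID, and sizing are placeholders; with credentials you would pass this to `boto3.client("fsx").create_file_system(**fs_request)`.

```python
# Sketch of option C: a fully managed, SMB-accessible Windows file system.
fs_request = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 1024,                  # GiB, illustrative sizing
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
    "WindowsConfiguration": {
        "DeploymentType": "MULTI_AZ_1",       # managed HA across two AZs
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,             # MB/s, illustrative
        "ActiveDirectoryId": "d-1234567890",  # hypothetical managed AD for auth
    },
}

# A Windows game server would then map the share over SMB, e.g.:
#   net use Z: \\fs-xxxxxxxx.example.com\share
```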
Babba 4 months, 1 week ago
rrharris 4 months, 1 week ago
Answer is C - SMB = storage gateway or FSx
upvoted 4 times
Question #306 Topic 1
A company wants to run an in-memory database for a latency-sensitive application that runs on Amazon EC2 instances. The application
processes more than 100,000 transactions each minute and requires high network throughput. A solutions architect needs to provide a cost-effective network design that minimizes data transfer charges.
Which solution meets these requirements?
A. Launch all EC2 instances in the same Availability Zone within the same AWS Region. Specify a placement group with cluster strategy when launching EC2 instances.
B. Launch all EC2 instances in different Availability Zones within the same AWS Region. Specify a placement group with partition strategy when launching EC2 instances.
C. Deploy an Auto Scaling group to launch EC2 instances in different Availability Zones based on a network utilization target.
D. Deploy an Auto Scaling group with a step scaling policy to launch EC2 instances in different Availability Zones.
Community vote distribution
A (100%)
kruasan 2 months ago
Reasons:
Launching instances within a single AZ and using a cluster placement group provides the lowest network latency and highest bandwidth between instances. This maximizes performance for an in-memory database and high-throughput application.
Communications between instances in the same AZ and placement group are free, minimizing data transfer charges. Inter-AZ and public IP traffic can incur charges.
A cluster placement group enables the instances to be placed close together within the AZ, allowing the high network throughput required. Partition groups span AZs, reducing bandwidth.
Auto Scaling across zones could launch instances in AZs that increase data transfer charges. It may reduce network throughput, impacting performance.
upvoted 3 times
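The reasoning above boils down to one launch-time configuration choice. As a minimal sketch (not a live API call), these are the parameters you would hand to boto3's `ec2.run_instances` to pin instances into a cluster placement group in a single AZ; the group name, subnet ID, AMI ID, and instance type here are hypothetical stand-ins:

```python
# Sketch of the request parameters for launching instances into a cluster
# placement group. A single SubnetId implies a single Availability Zone;
# the cluster strategy itself is set when the placement group is created.
def build_run_instances_params(placement_group: str, subnet_id: str, count: int) -> dict:
    return {
        "ImageId": "ami-0123456789abcdef0",    # hypothetical AMI
        "InstanceType": "r6i.4xlarge",         # memory-optimized, suits an in-memory DB
        "MinCount": count,
        "MaxCount": count,
        "SubnetId": subnet_id,                 # one subnet => one Availability Zone
        "Placement": {"GroupName": placement_group},
    }

params = build_run_instances_params("db-cluster-pg", "subnet-0abc1234", 4)
print(params["Placement"]["GroupName"])
```

Keeping all instances in one subnet (hence one AZ) plus the cluster placement group is what gives both the highest inter-instance bandwidth and zero inter-AZ transfer charges.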
kruasan 2 months ago
In contrast:
Option B - A partition placement group spans AZs, reducing network bandwidth between instances and potentially increasing costs.
Option C - Auto Scaling alone does not guarantee the network throughput and cost controls required for this use case. Launching across AZs could increase data transfer charges.
Option D - Step scaling policies determine how many instances to launch based on metrics alone. They lack control over network connectivity and costs between instances after launch.
upvoted 2 times
NoinNothing 2 months, 2 weeks ago
Cluster - have low latency if its in same AZ and same region so Answer is "A"
upvoted 2 times
BeeKayEnn 2 months, 3 weeks ago
Answer would be A - By launching all the EC2 instances in the same Availability Zone, they will all be within the same data center, so latency will be much lower than it would be across Availability Zones.
As all the Auto Scaling nodes will also be in the same Availability Zone (as per placement groups with cluster mode), this provides the low-latency network performance.
Reference is below: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 2 times
[Removed] 2 months, 4 weeks ago
A - Low latency, high net throughput
upvoted 1 times
elearningtakai 3 months ago
A placement group is a logical grouping of instances within a single Availability Zone, and it provides low-latency network connectivity between instances. By launching all EC2 instances in the same Availability Zone and specifying a placement group with cluster strategy, the application can
take advantage of the high network throughput and low latency network connectivity that placement groups provide.
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
Cluster placement groups improves throughput between the instances which means less EC2 instances would be needed thus reducing costs.
upvoted 1 times
maciekmaciek 4 months ago
A because Specify a placement group
upvoted 1 times
KZM 4 months ago
It is option A:
To achieve low latency, high throughput, and cost-effectiveness, the optimal solution is to launch EC2 instances as a placement group with the cluster strategy within the same Availability Zone.
upvoted 2 times
ManOnTheMoon 4 months ago
Why not C?
upvoted 1 times
Steve_4542636 3 months, 4 weeks ago
You're thinking operational efficiency. The question asks for cost reduction.
upvoted 2 times
rrharris 4 months, 1 week ago
Answer is A - Clustering
upvoted 2 times
Neha999 4 months, 1 week ago
A : Cluster placement group
upvoted 4 times
Question #307 Topic 1
A company that primarily runs its application servers on premises has decided to migrate to AWS. The company wants to minimize its need to
scale its Internet Small Computer Systems Interface (iSCSI) storage on premises. The company wants only its recently accessed data to remain stored locally.
Which AWS solution should the company use to meet these requirements?
A. Amazon S3 File Gateway
B. AWS Storage Gateway Tape Gateway
C. AWS Storage Gateway Volume Gateway stored volumes
D. AWS Storage Gateway Volume Gateway cached volumes
Community vote distribution
D (100%)
LuckyAro Highly Voted 4 months ago
AWS Storage Gateway Volume Gateway provides two configurations for connecting to iSCSI storage, namely, stored volumes and cached volumes. The stored volume configuration stores the entire data set on-premises and asynchronously backs up the data to AWS. The cached volume configuration stores recently accessed data on-premises, and the remaining data is stored in Amazon S3.
Since the company wants only its recently accessed data to remain stored locally, the cached volume configuration would be the most appropriate. It allows the company to keep frequently accessed data on-premises and reduce the need for scaling its iSCSI storage while still providing access to all data through the AWS cloud. This configuration also provides low-latency access to frequently accessed data and cost-effective off-site backups for less frequently accessed data.
upvoted 18 times
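The cached-volume behaviour described above is essentially an LRU cache in front of a large backing store. A toy sketch, with a plain dict standing in for the full dataset on S3 and a small ordered dict standing in for the local on-premises cache:

```python
from collections import OrderedDict

# Toy model of a Volume Gateway cached volume: recently accessed blocks stay
# in a bounded local cache; everything else lives in the (simulated) S3 store.
class CachedVolume:
    def __init__(self, backing: dict, cache_size: int):
        self.backing = backing          # stands in for the full dataset on S3
        self.cache = OrderedDict()      # stands in for local on-premises storage
        self.cache_size = cache_size

    def read(self, block: str) -> str:
        if block in self.cache:
            self.cache.move_to_end(block)            # local hit: low latency
        else:
            self.cache[block] = self.backing[block]  # miss: fetch from S3
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)       # evict least recently used
        return self.cache[block]

vol = CachedVolume({"a": "1", "b": "2", "c": "3"}, cache_size=2)
vol.read("a"); vol.read("b"); vol.read("c")
print(list(vol.cache))  # → ['b', 'c']
```

Only the most recently accessed blocks remain local, which is exactly why cached volumes (option D) minimize the on-premises storage that needs to scale.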
smgsi Highly Voted 4 months, 1 week ago
https://docs.amazonaws.cn/en_us/storagegateway/latest/vgw/StorageGatewayConcepts.html#storage-gateway-cached-concepts
upvoted 6 times
kruasan Most Recent 2 months ago
Volume Gateway cached volumes store entire datasets on S3, while keeping a portion of recently accessed data on your local storage as a cache. This meets the goal of minimizing on-premises storage needs while keeping hot data local.
The cache provides low-latency access to your frequently accessed data, while the entire dataset is retained durably and cost-effectively in S3.
You get virtually unlimited storage on S3 for your infrequently accessed data, while controlling the amount of local storage used for cache. This simplifies on-premises storage scaling.
Volume Gateway cached volumes support iSCSI connections from on-premises application servers, allowing a seamless migration experience. Servers access local cache and S3 storage volumes as iSCSI LUNs.
upvoted 3 times
kruasan 2 months ago
In contrast:
Option A - S3 File Gateway only provides file interfaces (NFS/SMB) to data in S3. It does not support block storage or cache recently accessed data locally.
Option B - Tape Gateway is designed for long-term backup and archiving to virtual tape cartridges on S3. It does not provide primary storage volumes or local cache for low-latency access.
Option C - Volume Gateway stored volumes keep entire datasets locally, then asynchronously back them up to S3. This does not meet the goal of minimizing on-premises storage needs.
upvoted 2 times
ManOnTheMoon 4 months ago
Agree with D
upvoted 1 times
Babba 4 months, 1 week ago
recently accessed data to remain stored locally - cached
upvoted 2 times
Bhawesh 4 months, 1 week ago
D. AWS Storage Gateway Volume Gateway cached volumes
upvoted 3 times
bdp123 4 months, 1 week ago
recently accessed data to remain stored locally - cached
upvoted 3 times
Question #308 Topic 1
A company has multiple AWS accounts that use consolidated billing. The company runs several active high performance Amazon RDS for Oracle On-Demand DB instances for 90 days. The company’s finance team has access to AWS Trusted Advisor in the consolidated billing account and all other AWS accounts.
The finance team needs to use the appropriate AWS account to access the Trusted Advisor check recommendations for RDS. The finance team must review the appropriate Trusted Advisor check to reduce RDS costs.
Which combination of steps should the finance team take to meet these requirements? (Choose two.)
A. Use the Trusted Advisor recommendations from the account where the RDS instances are running.
B. Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time.
C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization.
D. Review the Trusted Advisor check for Amazon RDS Idle DB Instances.
E. Review the Trusted Advisor check for Amazon Redshift Reserved Node Optimization.
Community vote distribution
BD (81%) BC (19%)
Nietzsche82 Highly Voted 4 months, 1 week ago
B & D
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 10 times
kruasan 2 months ago
https://docs.aws.amazon.com/awssupport/latest/user/organizational-view.html https://docs.aws.amazon.com/awssupport/latest/user/cost-optimization-checks.html#amazon-rds-idle-dbs-instances
upvoted 1 times
ErfanKh 2 months, 2 weeks ago
I think B and C, and ChatGPT agrees as well
upvoted 1 times
kraken21 2 months, 4 weeks ago
B and D
upvoted 1 times
Russs99 3 months ago
Option A is not necessary, as the Trusted Advisor recommendations can be accessed from the consolidated billing account. Option D is not relevant, as the check for idle DB instances is not specific to RDS instances. Option E is for Amazon Redshift, not RDS, and is therefore not relevant.
upvoted 1 times
kruasan 2 months ago
it is
Amazon RDS Idle DB Instances Description
Checks the configuration of your Amazon Relational Database Service (Amazon RDS) for any database (DB) instances that appear to be idle.
If a DB instance has not had a connection for a prolonged period of time, you can delete the instance to reduce costs. A DB instance is considered idle if the instance hasn't had a connection in the past 7 days. If persistent storage is needed for data on the instance, you can use
lower-cost options such as taking and retaining a DB snapshot. Manually created DB snapshots are retained until you delete them. https://docs.aws.amazon.com/awssupport/latest/user/cost-optimization-checks.html#amazon-rds-idle-dbs-instances
upvoted 1 times
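The idle-DB rule quoted above is simple enough to express directly. A minimal sketch of the 7-day criterion Trusted Advisor applies (the timestamps below are made-up sample data):

```python
from datetime import datetime, timedelta

# Trusted Advisor's stated rule: a DB instance is considered idle if it has
# not had a connection in the past 7 days.
IDLE_THRESHOLD = timedelta(days=7)

def is_idle(last_connection: datetime, now: datetime) -> bool:
    return now - last_connection > IDLE_THRESHOLD

now = datetime(2023, 6, 1)
assert is_idle(datetime(2023, 5, 20), now) is True    # 12 days quiet -> flagged idle
assert is_idle(datetime(2023, 5, 30), now) is False   # connected 2 days ago -> active
```

This is why option D fits the scenario: actively used instances simply won't be flagged, and the check directly targets deletable cost.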
Steve_4542636 3 months, 4 weeks ago
I got with B and D
upvoted 2 times
I would go with B and C, as the company has been running the instances for 90 days and option C is based on a 30-day report, which would mean there is higher potential for cost savings there than from idle instances
upvoted 2 times
Steve_4542636 3 months, 4 weeks ago
C is stating "Reserved Instances" The question states they are using On Demand Instances. Reserved instances are reserved for less money for 1 or 3 years.
upvoted 5 times
In the scenario it says 90 days, therefore the correct answer is D, not C
upvoted 1 times
Michal_L_95 3 months, 2 weeks ago
Once read the question again, I agree with you.
upvoted 1 times
reduce costs - delete idle instances
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 3 times
This same URL also says that there is an option which recommends the purchase of reserved nodes. So I think that C is the option instead of D, because since they already use on-demand DB instances, most probably there will not be idle instances. But if we replace them with reserved ones, we can indeed have some cost savings.
What are your thoughts on it?
upvoted 1 times
B. Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time. This option allows the finance team to see all RDS instance checks across all AWS accounts in one place. Since the company uses consolidated billing, this account will have access to all of the AWS accounts' Trusted Advisor recommendations.
C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization. This check can help identify cost savings opportunities for RDS by identifying instances that can be covered by Reserved Instances. This can result in significant savings on RDS costs.
upvoted 1 times
I also think it is B and C. I think that C is the option instead of D, because since they already use on-demand DB instances, most probably there will not be idle instances. But if we replace them with reserved ones, we can indeed have some cost savings.
upvoted 1 times
Option A is not recommended because the finance team may not have access to the AWS account where the RDS instances are running. Even if they have access, it may not be practical to check each individual account for Trusted Advisor recommendations.
Option D is not the best choice because it only addresses the issue of idle instances and may not provide the most effective recommendations to reduce RDS costs.
Option E is not relevant to this scenario since it is related to Amazon Redshift, not RDS.
upvoted 1 times
jennyka76 4 months, 1 week ago
B & D
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 2 times
B and D I believe
upvoted 4 times
Question #309 Topic 1
A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being accessed or are rarely accessed.
Which solution will accomplish this goal with the LEAST operational overhead?
A. Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.
B. Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.
C. Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with Amazon Athena.
D. Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon CloudWatch Logs.
Community vote distribution
A (100%)
kpato87 Highly Voted 4 months, 1 week ago
S3 Storage Lens is a fully managed S3 storage analytics solution that provides a comprehensive view of object storage usage, activity trends, and recommendations to optimize costs. Storage Lens allows you to analyze object access patterns across all of your S3 buckets and generate detailed metrics and reports.
upvoted 6 times
kruasan Most Recent 2 months ago
The S3 Storage Lens dashboard provides visibility into storage metrics and activity patterns to help optimize storage costs. It shows metrics like objects added, objects deleted, storage consumed, and requests. It can filter by bucket, prefix, and tag to analyze specific subsets of data
upvoted 1 times
kruasan 2 months ago
The standard S3 console dashboard provides basic info but would require manually analyzing metrics for each bucket. This does not scale well and requires significant overhead.
Turning on the BucketSizeBytes metric and analyzing the data in Athena may provide insights but would require enabling metrics, building Athena queries, and analyzing the results. This requires more operational effort than option A.
Enabling CloudTrail logging and monitoring the logs in CloudWatch Logs could provide access pattern data but would require setting up CloudTrail, monitoring the logs, and analyzing the relevant info. This option has the highest operational overhead
upvoted 1 times
LuckyAro 4 months ago
S3 Storage Lens provides a dashboard with advanced activity metrics that enable the identification of infrequently accessed and unused buckets. This can help a solutions architect optimize storage costs without incurring additional operational overhead.
upvoted 3 times
Question #310 Topic 1
A company sells datasets to customers who do research in artificial intelligence and machine learning (AI/ML). The datasets are large, formatted files that are stored in an Amazon S3 bucket in the us-east-1 Region. The company hosts a web application that the customers use to purchase
access to a given dataset. The web application is deployed on multiple Amazon EC2 instances behind an Application Load Balancer. After a purchase is made, customers receive an S3 signed URL that allows access to the files.
The customers are distributed across North America and Europe. The company wants to reduce the cost that is associated with data transfers and wants to maintain or improve performance.
What should a solutions architect do to meet these requirements?
A. Configure S3 Transfer Acceleration on the existing S3 bucket. Direct customer requests to the S3 Transfer Acceleration endpoint. Continue to use S3 signed URLs for access control.
B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to CloudFront signed URLs for access control.
C. Set up a second S3 bucket in the eu-central-1 Region with S3 Cross-Region Replication between the buckets. Direct customer requests to the closest Region. Continue to use S3 signed URLs for access control.
D. Modify the web application to enable streaming of the datasets to end users. Configure the web application to read the data from the existing S3 bucket. Implement access control directly in the application.
Community vote distribution
B (100%)
LuckyAro Highly Voted 4 months ago
To reduce the cost associated with data transfers and maintain or improve performance, a solutions architect should use Amazon CloudFront, a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds.
Deploying a CloudFront distribution with the existing S3 bucket as the origin will allow the company to serve the data to customers from edge locations that are closer to them, reducing data transfer costs and improving performance.
Directing customer requests to the CloudFront URL and switching to CloudFront signed URLs for access control will enable customers to access the data securely and efficiently.
upvoted 7 times
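Under the hood, a CloudFront signed URL is built from a policy document that names the resource and an expiry time. As a hedged sketch: in production you would sign this policy with your CloudFront key pair (for example via botocore's CloudFrontSigner); here we only build the canned-policy JSON to show its shape, and the distribution domain and file path are hypothetical:

```python
import json
import time

# Build the "canned policy" that underlies a CloudFront signed URL:
# one resource, one expiry condition.
def make_canned_policy(url: str, expires_epoch: int) -> str:
    policy = {
        "Statement": [{
            "Resource": url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }
    # CloudFront expects the policy JSON serialized without whitespace
    return json.dumps(policy, separators=(",", ":"))

expires = int(time.time()) + 3600  # URL valid for one hour
policy = make_canned_policy(
    "https://d111111abcdef8.cloudfront.net/dataset.zip", expires)
print(policy)
```

The signed URL then carries the expiry and an RSA signature of this policy as query parameters, which is how CloudFront enforces access control at the edge without contacting the origin.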
Bhawesh 4 months, 1 week ago
B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to CloudFront signed URLs for access control.
upvoted 2 times
Question #311 Topic 1
A company is using AWS to design a web application that will process insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not get lost. The solution must maximize operational efficiency and must minimize maintenance.
Which solution meets these requirements?
A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to poll messages from its own data stream.
B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to use its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon OpenSearch Service cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application
servers to search for the messages from OpenSearch Service and process them accordingly.
Community vote distribution
C (100%)
VIad Highly Voted 4 months, 1 week ago
C is the best option
upvoted 7 times
Yechi Highly Voted 4 months, 1 week ago
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
upvoted 6 times
lexotan Most Recent 2 months, 1 week ago
These wrong answers from examtopic are getting me so frustrated. Which one is the correct answer, then?
upvoted 3 times
Steve_4542636 3 months, 4 weeks ago
This is the SNS fan-out technique where you will have one SNS service to many SQS services https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
upvoted 5 times
UnluckyDucky 3 months, 1 week ago
SNS Fan-out fans message to all subscribers, this uses SNS filtering to publish the message only to the right SQS queue (not all of them).
upvoted 1 times
LuckyAro 4 months ago
Quote types need to be separated: SNS message filtering can be used to publish messages to the appropriate SQS queue based on the quote type, ensuring that quotes are separated by type.
Quotes must be responded to within 24 hours and must not get lost: SQS provides reliable and scalable queuing for messages, ensuring that quotes will not get lost and can be processed in a timely manner. Additionally, each backend application server can use its own SQS queue, ensuring that quotes are processed efficiently without any delay.
Operational efficiency and minimizing maintenance: Using a single SNS topic and multiple SQS queues is a scalable and cost-effective approach, which can help to maximize operational efficiency and minimize maintenance. Additionally, SNS and SQS are fully managed services, which means that the company will not need to worry about maintenance tasks such as software updates, hardware upgrades, or scaling the infrastructure.
upvoted 6 times
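The filter-policy routing described above can be sketched in a few lines. This mimics SNS attribute string matching in plain Python to show why each backend only ever sees its own quote type; the queue names and the `quote_type` attribute are hypothetical:

```python
# Minimal model of SNS subscription filter policies: each SQS queue subscribes
# to the single SNS topic with a policy that matches one quote type.
FILTER_POLICIES = {
    "auto-quotes-queue": {"quote_type": ["auto"]},
    "home-quotes-queue": {"quote_type": ["home"]},
    "life-quotes-queue": {"quote_type": ["life"]},
}

def matching_queues(message_attributes: dict) -> list:
    """Return the queues whose filter policy matches the message attributes."""
    matches = []
    for queue, policy in FILTER_POLICIES.items():
        if all(message_attributes.get(attr) in allowed
               for attr, allowed in policy.items()):
            matches.append(queue)
    return matches

print(matching_queues({"quote_type": "home"}))  # → ['home-quotes-queue']
```

Because SQS retains unconsumed messages (up to 14 days, well within the 24-hour response window), quotes are separated by type and cannot get lost, with SNS and SQS both fully managed.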
Question #312 Topic 1
A company has an application that runs on several Amazon EC2 instances. Each EC2 instance has multiple Amazon Elastic Block Store (Amazon EBS) data volumes attached to it. The application’s EC2 instance configuration and data need to be backed up nightly. The application also needs to be recoverable in a different AWS Region.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Write an AWS Lambda function that schedules nightly snapshots of the application’s EBS volumes and copies the snapshots to a different Region.
B. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EC2 instances as resources.
C. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EBS volumes as resources.
D. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different Availability Zone.
Community vote distribution
B (94%) 6%
TungPham Highly Voted 4 months ago
https://aws.amazon.com/vi/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/
When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance, and it will attach them to an AMI that stores all parameters from the original EC2 instance except for two
upvoted 9 times
khasport Highly Voted 4 months ago
B is the answer: the requirement is "The application’s EC2 instance configuration and data need to be backed up nightly", so we need to "add the application’s EC2 instances as resources". This option backs up both the EC2 configuration and the data.
upvoted 8 times
Geekboii Most Recent 2 months, 4 weeks ago
i would say B
upvoted 1 times
AlmeroSenior 4 months ago
AWS documentation states that if you select the EC2 instance, the associated EBS volumes are automatically covered.
https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/
upvoted 2 times
LuckyAro 4 months ago
B is the most appropriate solution because it allows you to create a backup plan to automate the backup process of EC2 instances and EBS volumes, and copy backups to another region. Additionally, you can add the application's EC2 instances as resources to ensure their configuration and data are backed up nightly.
A and D involve writing custom Lambda functions to automate the snapshot process, which can be complex and require more maintenance effort. Moreover, these options do not provide an integrated solution for managing backups and recovery, and copying snapshots to another region.
Option C involves creating a backup plan with AWS Backup to perform backups for EBS volumes only. This approach would not back up the EC2 instances and their configuration
upvoted 2 times
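The whole of option B reduces to a single backup-plan document. A sketch in the shape AWS Backup's CreateBackupPlan API expects, with a nightly schedule and a cross-Region copy action; the plan name, vault names, destination Region, and account ID are hypothetical:

```python
# Sketch of an AWS Backup plan: nightly backups of assigned resources
# (the EC2 instances), copied to a vault in another Region for DR.
backup_plan = {
    "BackupPlanName": "nightly-ec2-plan",
    "Rules": [{
        "RuleName": "nightly",
        "TargetBackupVaultName": "primary-vault",
        "ScheduleExpression": "cron(0 3 * * ? *)",  # every night at 03:00 UTC
        "CopyActions": [{
            # Cross-Region copy: a vault in a different Region than the source
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
        }],
    }],
}
print(backup_plan["Rules"][0]["ScheduleExpression"])
```

The EC2 instances are then attached to this plan via a resource assignment, which is what makes B more operationally efficient than hand-rolled Lambda snapshot scripts: schedule, retention, and cross-Region copy are all declarative.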
everfly 4 months, 1 week ago
The application’s EC2 instance configuration and data are stored on EBS volume right?
upvoted 1 times
Rehan33 4 months, 1 week ago
The data is stored on EBS volumes, so why are we not using EBS as the source instead of EC2?
upvoted 1 times
obatunde 4 months, 1 week ago
Because "The application’s EC2 instance configuration and data need to be backed up nightly"
upvoted 3 times
fulingyu288 4 months, 1 week ago
Use AWS Backup to create a backup plan that includes the EC2 instances, Amazon EBS snapshots, and any other resources needed for recovery. The backup plan can be configured to run on a nightly schedule.
upvoted 1 times
zTopic 4 months, 1 week ago
The application’s EC2 instance configuration and data need to be backed up nightly >> B
upvoted 1 times
NolaHOla 4 months, 1 week ago
But isn't the data needed to be backed up on the EBS ?
upvoted 1 times
Question #313 Topic 1
A company is building a mobile app on AWS. The company wants to expand its reach to millions of users. The company needs to build a platform so that authorized users can watch the company’s content on their mobile devices.
What should a solutions architect recommend to meet these requirements?
A. Publish content to a public Amazon S3 bucket. Use AWS Key Management Service (AWS KMS) keys to stream content.
B. Set up IPsec VPN between the mobile app and the AWS environment to stream content.
C. Use Amazon CloudFront. Provide signed URLs to stream content.
D. Set up AWS Client VPN between the mobile app and the AWS environment to stream content.
Community vote distribution
C (100%)
Steve_4542636 Highly Voted 3 months, 3 weeks ago
Enough with CloudFront already.
upvoted 14 times
TariqKipkemei 1 month, 3 weeks ago
Hahaha..cloudfront too hyped :)
upvoted 1 times
kprakashbehera 3 months, 3 weeks ago
Cloudfront is the correct solution.
upvoted 2 times
datz 3 months, 1 week ago
Feel your pain :D hahaha
upvoted 2 times
LuckyAro 4 months ago
Amazon CloudFront is a content delivery network (CDN) that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront supports signed URLs that provide authorized access to your content. This feature allows the company to control who can access their content and for how long, providing a secure and scalable solution for millions of users.
upvoted 3 times
jennyka76 4 months, 1 week ago
C
https://www.amazonaws.cn/en/cloudfront/
upvoted 1 times
Question #314 Topic 1
A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type in anticipation of more users in the future.
Which service should a solutions architect recommend?
A. Amazon Aurora MySQL
B. Amazon Aurora Serverless for MySQL
C. Amazon Redshift Spectrum
D. Amazon RDS for MySQL
Community vote distribution
B (100%)
cloudbusting Highly Voted 4 months, 1 week ago
"without selecting a particular instance type" = serverless
upvoted 13 times
elearningtakai Most Recent 3 months ago
With Aurora Serverless for MySQL, you don't need to select a particular instance type, as the service automatically scales up or down based on the application's needs.
upvoted 3 times
LuckyAro 4 months ago
Amazon Aurora Serverless for MySQL is a fully managed, auto-scaling relational database service that scales up or down automatically based on the application demand. This service provides all the capabilities of Amazon Aurora, such as high availability, durability, and security, without requiring the customer to provision any database instances.
With Amazon Aurora Serverless for MySQL, the sales team can enjoy minimal downtime since the database is designed to automatically scale to accommodate the increased traffic. Additionally, the service allows the customer to pay only for the capacity used, making it cost-effective for infrequent access patterns.
Amazon RDS for MySQL could also be an option, but it requires the customer to select an instance type, and the database administrator would need to monitor and adjust the instance size manually to accommodate the increasing traffic.
upvoted 2 times
Drayen25 4 months, 1 week ago
Minimal downtime points directly to Aurora Serverless
upvoted 2 times
Question #315 Topic 1
A company experienced a breach that affected several applications in its on-premises data center. The attacker took advantage of vulnerabilities in the custom applications that were running on the servers. The company is now migrating its applications to run on Amazon EC2 instances. The company wants to implement a solution that actively scans for vulnerabilities on the EC2 instances and sends a report that details the findings.
Which solution will meet these requirements?
A. Deploy AWS Shield to scan the EC2 instances for vulnerabilities. Create an AWS Lambda function to log any findings to AWS CloudTrail.
B. Deploy Amazon Macie and AWS Lambda functions to scan the EC2 instances for vulnerabilities. Log any findings to AWS CloudTrail.
C. Turn on Amazon GuardDuty. Deploy the GuardDuty agents to the EC2 instances. Configure an AWS Lambda function to automate the generation and distribution of reports that detail the findings.
D. Turn on Amazon Inspector. Deploy the Amazon Inspector agent to the EC2 instances. Configure an AWS Lambda function to automate the generation and distribution of reports that detail the findings.
Community vote distribution
D (94%) 6%
siyam008 Highly Voted 3 months, 3 weeks ago
AWS Shield for DDoS
Amazon Macie to discover and protect sensitive data
Amazon GuardDuty for intelligent threat detection to protect AWS accounts
Amazon Inspector for automated security assessment, like known vulnerabilities
upvoted 20 times
kruasan Most Recent 2 months ago
Amazon Inspector:
Performs active vulnerability scans of EC2 instances. It looks for software vulnerabilities, unintended network accessibility, and other security issues.
Requires installing an agent on EC2 instances to perform scans. The agent must be deployed to each instance.
Provides scheduled scan reports detailing any findings of security risks or vulnerabilities. These reports can be used to patch or remediate issues.
Is best suited for proactively detecting security weaknesses and misconfigurations in your AWS environment.
upvoted 2 times
kruasan 2 months ago
Amazon GuardDuty:
Monitors for malicious activity like unusual API calls, unauthorized infrastructure deployments, or compromised EC2 instances. It uses machine learning and behavioral analysis of logs.
Does not require installing any agents. It relies on analyzing AWS CloudTrail, VPC Flow Logs, and DNS logs.
Alerts you to any detected threats, suspicious activity or policy violations in your AWS accounts. These alerts warrant investigation but may not always require remediation.
Is focused on detecting active threats, unauthorized behavior, and signs of a compromise in your AWS environment.
Can also detect some vulnerabilities and misconfigurations but coverage is not as broad as a dedicated service like Inspector.
upvoted 2 times
datz 3 months, 1 week ago
Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances.
It is a kind of automated security assessment service that checks the network exposure of your EC2 instances and the latest security state of applications running on them. It can automatically discover your AWS workloads and continuously scan them for open loopholes or vulnerabilities.
upvoted 1 times
shanwford 3 months, 1 week ago
Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances. Guard Duty continuously monitors your entire AWS account via Cloud Trail, Flow Logs, DNS Logs as Input.
upvoted 1 times
GalileoEC2 3 months, 1 week ago
:) C is correct
https://cloudkatha.com/amazon-guardduty-vs-inspector-which-one-should-you-use/
upvoted 1 times
MssP 3 months ago
Please, read the link you sent: Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances. GuardDuty is very critical part to identify threats, based on that findings you can setup automated preventive actions or remediation’s. So Answer is D.
upvoted 1 times
LuckyAro 4 months ago
Amazon Inspector is a security assessment service that helps to identify security vulnerabilities and compliance issues in applications deployed on Amazon EC2 instances. It can be used to assess the security of applications that are deployed on Amazon EC2 instances, including those that are custom-built.
To use Amazon Inspector, the Amazon Inspector agent must be installed on the EC2 instances that need to be assessed. The agent collects data about the instances and sends it to Amazon Inspector for analysis. Amazon Inspector then generates a report that details any security vulnerabilities that were found and provides guidance on how to remediate them.
By configuring an AWS Lambda function, the company can automate the generation and distribution of reports that detail the findings. This means that reports can be generated and distributed as soon as vulnerabilities are detected, allowing the company to take action quickly.
upvoted 1 times
pbpally 4 months, 1 week ago
I'm a little confused on how someone came up with C, it is definitely D.
upvoted 1 times
obatunde 4 months, 1 week ago
Amazon Inspector is an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. https://aws.amazon.com/inspector/features/?nc=sn&loc=2
upvoted 3 times
cloudbusting 4 months, 1 week ago
this is inspector = https://medium.com/aws-architech/use-case-aws-inspector-vs-guardduty-3662bf80767a
upvoted 3 times
Question #316 Topic 1
A company uses an Amazon EC2 instance to run a script to poll for and process messages in an Amazon Simple Queue Service (Amazon SQS) queue. The company wants to reduce operational costs while maintaining its ability to process a growing number of messages that are added to the queue.
What should a solutions architect recommend to meet these requirements?
A. Increase the size of the EC2 instance to process messages faster.
B. Use Amazon EventBridge to turn off the EC2 instance when the instance is underutilized.
C. Migrate the script on the EC2 instance to an AWS Lambda function with the appropriate runtime.
D. Use AWS Systems Manager Run Command to run the script on demand.
Community vote distribution
C (85%) D (15%)
kpato87 Highly Voted 4 months, 1 week ago
By migrating the script to AWS Lambda, the company can take advantage of the auto-scaling feature of the service. AWS Lambda will automatically scale resources to match the size of the workload. This means that the company will not have to worry about provisioning or managing instances as the number of messages increases, resulting in lower operational costs
upvoted 5 times
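The migration in option C can be sketched as a minimal Lambda handler. The event shape below follows the standard SQS-to-Lambda integration; `process_message` is a hypothetical stand-in for whatever the original polling script did per message:

```python
import json

def process_message(message):
    # Placeholder for the original polling script's per-message work.
    print("handled:", message)

def handler(event, context):
    """Process a batch of SQS messages delivered to Lambda.

    Lambda polls the queue on the company's behalf and invokes this
    function with a batch of records, so no EC2 instance is needed.
    """
    processed = 0
    for record in event["Records"]:
        body = json.loads(record["body"])  # SQS message body is a string
        process_message(body)
        processed += 1
    return {"processed": processed}
```

With this shape, scaling and retries are handled by the SQS event source mapping rather than by a long-running instance.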
Steve_4542636 Most Recent 3 months, 3 weeks ago
Lambda costs money only when it's processing, not when idle
upvoted 2 times
ManOnTheMoon 4 months ago
Agree with C
upvoted 1 times
khasport 4 months ago
the answer is C. With this option, you can reduce operational cost as the question mentioned
upvoted 1 times
LuckyAro 4 months ago
AWS Lambda is a serverless compute service that allows you to run your code without provisioning or managing servers. By migrating the script to an AWS Lambda function, you can eliminate the need to maintain an EC2 instance, reducing operational costs. Additionally, Lambda automatically scales to handle the increasing number of messages in the SQS queue.
upvoted 1 times
zTopic 4 months, 1 week ago
It should be C.
Lambda allows you to execute code without provisioning or managing servers, so it is ideal for running scripts that poll for and process messages in an Amazon SQS queue. The scaling of the Lambda function is automatic, and you only pay for the actual time it takes to process the messages.
upvoted 3 times
Bhawesh 4 months, 1 week ago
To reduce the operational overhead, it should be:
D. Use AWS Systems Manager Run Command to run the script on demand.
upvoted 2 times
lucdt4 1 month, 1 week ago
No, replace the EC2 instance with Lambda to reduce costs
upvoted 1 times
Question #317 Topic 1
A company uses a legacy application to produce data in CSV format. The legacy application stores the output data in Amazon S3. The company is deploying a new commercial off-the-shelf (COTS) application that can perform complex SQL queries to analyze data that is stored in Amazon Redshift and Amazon S3 only. However, the COTS application cannot process the .csv files that the legacy application produces.
The company cannot update the legacy application to produce data in another format. The company needs to implement a solution so that the COTS application can use the data that the legacy application produces.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Glue extract, transform, and load (ETL) job that runs on a schedule. Configure the ETL job to process the .csv files and store the processed data in Amazon Redshift.
B. Develop a Python script that runs on Amazon EC2 instances to convert the .csv files to .sql files. Invoke the Python script on a cron schedule to store the output files in Amazon S3.
C. Create an AWS Lambda function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda function to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.
D. Use Amazon EventBridge to launch an Amazon EMR cluster on a weekly schedule. Configure the EMR cluster to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in an Amazon Redshift table.
Community vote distribution
A (89%) 11%
kraken21 2 months, 4 weeks ago
Glue is serverless and has less operational overhead than EMR, so A.
upvoted 1 times
elearningtakai 3 months ago
A, AWS Glue is a fully managed ETL service that can extract data from various sources, transform it into the required format, and load it into a target data store. In this case, the ETL job can be configured to read the CSV files from Amazon S3, transform the data into a format that can be loaded into Amazon Redshift, and load it into an Amazon Redshift table.
B requires the development of a custom script to convert the CSV files to SQL files, which could be time-consuming and introduce additional operational overhead. C, while using serverless technology, requires the additional use of DynamoDB to store the processed data, which may not be necessary if the data is only needed in Amazon Redshift. D, while an option, is not the most efficient solution as it requires the creation of an EMR cluster, which can be costly and complex to manage.
upvoted 4 times
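To make the ETL step in option A concrete, here is the kind of CSV-to-row transform the Glue job would apply, shown in plain Python rather than the Glue API so it stays self-contained (in the real solution this logic runs inside a scheduled AWS Glue job that loads the rows into Redshift):

```python
import csv
import io

def csv_to_rows(csv_text):
    """Parse legacy CSV output into dict rows ready for loading.

    A pure-Python illustration only; Glue would do this at scale with
    DynamicFrames and then write the result to an Amazon Redshift table.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [dict(row) for row in reader]
```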
dcp 3 months, 1 week ago
To meet the requirement with the least operational overhead, a serverless approach should be used. Among the options provided, option C provides a serverless solution using AWS Lambda, S3, and DynamoDB. Therefore, the solution should be to create an AWS Lambda function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda function to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.
Option A is also a valid solution, but it may involve more operational overhead than Option C. With Option A, you would need to set up and manage an AWS Glue job, which would require more setup time than creating an AWS Lambda function. Additionally, AWS Glue jobs historically carried a 10-minute minimum billing duration, which may not be necessary or desirable for this use case. However, if the data processing is particularly complex or requires a lot of data transformation, AWS Glue may be a more appropriate solution.
upvoted 1 times
MssP 3 months ago
Important point: The COTS performs complex SQL queries to analyze data in Amazon Redshift. If you use DynamoDB -> No SQL querires. Option A makes more sense.
upvoted 3 times
LuckyAro 4 months ago
A would be the best solution as it involves the least operational overhead. With this solution, an AWS Glue ETL job is created to process the .csv files and store the processed data directly in Amazon Redshift. This is a serverless approach that does not require any infrastructure to be provisioned, configured, or maintained. AWS Glue provides a fully managed, pay-as-you-go ETL service that can be easily configured to process data from S3 and load it into Amazon Redshift. This approach allows the legacy application to continue to produce data in the CSV format that it currently uses, while providing the new COTS application with the ability to analyze the data using complex SQL queries.
upvoted 3 times
jennyka76 4 months, 1 week ago
A
https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-csv-home.html I AGREE AFTER READING LINK
upvoted 1 times
cloudbusting 4 months, 1 week ago
A: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format.html
upvoted 1 times
Question #318 Topic 1
A company recently migrated its entire IT environment to the AWS Cloud. The company discovers that users are provisioning oversized Amazon EC2 instances and modifying security group rules without using the appropriate change control process. A solutions architect must devise a strategy to track and audit these inventory and configuration changes.
Which actions should the solutions architect take to meet these requirements? (Choose two.)
A. Enable AWS CloudTrail and use it for auditing.
B. Use data lifecycle policies for the Amazon EC2 instances.
C. Enable AWS Trusted Advisor and reference the security dashboard.
D. Enable AWS Config and create rules for auditing and compliance purposes.
E. Restore previous resource configurations with an AWS CloudFormation template.
Community vote distribution
AD (100%)
LuckyAro Highly Voted 4 months ago
A. Enable AWS CloudTrail and use it for auditing. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs. By enabling CloudTrail, the company can track user activity and changes to AWS resources, and monitor compliance with internal policies and external regulations.
D. Enable AWS Config and create rules for auditing and compliance purposes. AWS Config provides a detailed inventory of the AWS resources in your account, and continuously records changes to the configurations of those resources. By creating rules in AWS Config, the company can automate the evaluation of resource configurations against desired state, and receive alerts when configurations drift from compliance.
Options B, C, and E are not directly relevant to the requirement of tracking and auditing inventory and configuration changes.
upvoted 5 times
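The evaluation logic a custom AWS Config rule could run for option D might look like the following sketch. `ALLOWED_TYPES` is a hypothetical whitelist; a real rule would receive the instance configuration from AWS Config's invocation event and report the result back via the Config API:

```python
# Hypothetical list of instance types permitted by the change control process.
ALLOWED_TYPES = {"t3.micro", "t3.small", "t3.medium"}

def evaluate_instance(configuration):
    """Return a Config-style compliance verdict for one EC2 instance.

    `configuration` is a dict of the recorded resource configuration;
    only the instanceType field is inspected in this sketch.
    """
    instance_type = configuration.get("instanceType", "")
    if instance_type in ALLOWED_TYPES:
        return "COMPLIANT"
    return "NON_COMPLIANT"
```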
kruasan Most Recent 2 months ago
A) Enable AWS CloudTrail and use it for auditing.
AWS CloudTrail provides a record of API calls and can be used to audit changes made to EC2 instances and security groups. By analyzing CloudTrail logs, the solutions architect can track who provisioned oversized instances or modified security groups without proper approval.
D) Enable AWS Config and create rules for auditing and compliance purposes.
AWS Config can record the configuration changes made to resources like EC2 instances and security groups. The solutions architect can create AWS Config rules to monitor for non-compliant changes, like launching certain instance types or opening security group ports without permission. AWS Config would alert on any violations of these rules.
upvoted 1 times
kruasan 2 months ago
The other options would not fully meet the auditing and change tracking requirements:
B) Data lifecycle policies control when EC2 instances are backed up or deleted but do not audit configuration changes.
C) AWS Trusted Advisor security checks may detect some compliance violations after the fact but do not comprehensively log changes like AWS CloudTrail and AWS Config do.
E) CloudFormation templates enable rollback but do not provide an audit trail of changes. The solutions architect would not know who made unauthorized modifications in the first place.
upvoted 1 times
jennyka76 4 months, 1 week ago
AGREE WITH ANSWER - A & D
CloudTrail and Config
upvoted 1 times
Neha999 4 months, 1 week ago
CloudTrail and Config
upvoted 2 times
Question #319 Topic 1
A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage the instances. After a recent audit, the company’s security team is mandating the removal of all shared keys. A solutions architect must design a solution that provides secure access to the EC2 instances.
Which solution will meet this requirement with the LEAST amount of administrative overhead?
A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.
B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.
C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.
D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.
Community vote distribution
A (85%) C (15%)
kruasan 2 months ago
AWS Systems Manager Session Manager provides secure shell access to EC2 instances without the need for SSH keys. It meets the security requirement to remove shared SSH keys while minimizing administrative overhead.
upvoted 1 times
kruasan 2 months ago
Session Manager is a fully managed AWS Systems Manager capability. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs). You can use either an interactive one-click browser-based shell or the AWS Command Line Interface (AWS CLI). Session Manager provides secure and auditable node management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to managed nodes, strict security practices, and fully auditable logs with node access details, while providing end users with simple one-click cross-platform access to your managed nodes.
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 1 times
kruasan 2 months ago
Who should use Session Manager?
Any AWS customer who wants to improve their security and audit posture, reduce operational overhead by centralizing access control on managed nodes, and reduce inbound node access.
Information Security experts who want to monitor and track managed node access and activity, close down inbound ports on managed nodes, or allow connections to managed nodes that don't have a public IP address.
Administrators who want to grant and revoke access from a single location, and who want to provide one solution to users for Linux, macOS, and Windows Server managed nodes.
Users who want to connect to a managed node with just one click from the browser or AWS CLI without having to provide SSH keys.
upvoted 1 times
Stanislav4907 3 months, 2 weeks ago
You seriously don't want to go to SSM Session Manager for every single EC2 instance. You have to create a solution, not use services meant for one-time access. A bastion will give you the option to manage thousands of EC2 machines from one. Plus you can use Ansible from it.
upvoted 2 times
Zox42 3 months ago
Question:" the company’s security team is mandating the removal of all shared keys", answer C can't be right because it says:"Allow shared SSH access to a set of bastion instances".
upvoted 2 times
UnluckyDucky 3 months, 1 week ago
Session Manager is the best practice and recommended way by Amazon to manage your instances. Bastion hosts require remote access therefore exposing them to the internet.
The most secure way is definitely session manager therefore answer A is correct imho.
upvoted 2 times
Steve_4542636 3 months, 3 weeks ago
I vote a
upvoted 1 times
LuckyAro 4 months ago
AWS Systems Manager Session Manager provides secure and auditable instance management without the need for any inbound connections or open ports. It allows you to manage your instances through an interactive one-click browser-based shell or through the AWS CLI. This means that you don't have to manage any SSH keys, and you don't have to worry about securing access to your instances as access is controlled through IAM policies.
upvoted 3 times
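Since access under option A is controlled through IAM rather than SSH keys, a minimal policy document granting Session Manager access to specific instances can be sketched as below. This is an illustration only; real policies usually also allow users to terminate their own sessions, and the instance ARN here is hypothetical:

```python
def session_manager_policy(instance_arns):
    """Build a minimal IAM policy document allowing ssm:StartSession
    against a list of instance ARNs. A sketch, not a complete policy."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ssm:StartSession",
                "Resource": instance_arns,
            }
        ],
    }
```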
bdp123 4 months, 1 week ago
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 2 times
jennyka76 4 months, 1 week ago
ANSWER - A
AWS Session Manager is correct: least effort to access a Linux system when you are already logged in to the AWS console, so there is no need for tokens or other stuff; it is done in the background by AWS. Makes sense.
upvoted 2 times
cloudbusting 4 months, 1 week ago
Answer is A
upvoted 3 times
VIad 4 months, 1 week ago
Answer is A
Using AWS Systems Manager Session Manager to connect to the EC2 instances is a secure option as it eliminates the need for inbound SSH ports and removes the requirement to manage SSH keys manually. It also provides a complete audit trail of user activity. This solution requires no additional software to be installed on the EC2 instances.
upvoted 4 times
Question #320 Topic 1
A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company’s data science team wants to query ingested data in near-real time.
Which solution provides near-real-time data querying that is scalable with minimal data loss?
A. Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data.
B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data.
D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
Community vote distribution
A (90%) 10%
LuckyAro Highly Voted 4 months ago
A is the solution for the company's requirements. Publishing data to Amazon Kinesis Data Streams can support ingestion rates as high as 1 MB/s and provide real-time data processing. Kinesis Data Analytics can query the ingested data in real time with low latency, and the solution can scale as needed to accommodate increases in ingestion rates or querying needs. This solution also ensures minimal data loss in the event of an EC2 instance reboot, since Kinesis Data Streams retains data for 24 hours by default, extendable up to 7 days or more.
upvoted 6 times
nublit Most Recent 1 month ago
Amazon Kinesis Data Firehose can deliver data in real-time to Amazon Redshift, making it immediately available for queries. Amazon Redshift, on the other hand, is a powerful data analytics service that allows fast and scalable querying of large volumes of data.
upvoted 1 times
kruasan 2 months ago
Provide near-real-time data ingestion into Kinesis Data Streams with the ability to handle the 1 MB/s ingestion rate. Data would be stored redundantly across shards.
Enable near-real-time querying of the data using Kinesis Data Analytics. SQL queries can be run directly against the Kinesis data stream.
Minimize data loss since data is replicated across shards. If an EC2 instance is rebooted, the data stream is still accessible.
Scale seamlessly to handle varying ingestion and query rates.
upvoted 2 times
kruasan 2 months ago
The other options would not fully meet the requirements:
Kinesis Firehose + Redshift would introduce latency since data must be loaded from Firehose into Redshift before querying. Redshift would lack real-time capabilities.
An EC2 instance store and Kinesis Firehose to S3 with Athena querying would risk data loss from instance store if an instance reboots. Athena querying data in S3 also lacks real-time capabilities.
Using EBS storage, Kinesis Firehose to Redis and subscribing to Redis may provide near-real-time ingestion and querying but risks data loss if an EBS volume or EC2 instance fails. Recovery requires re-hydrating data from a backup which impacts real-time needs.
upvoted 1 times
joechen2023 1 week, 5 days ago
I voted A as well, although I'm not 100% sure why B is not correct. I just selected what seems the simpler solution between A and B.
The reason kruasan gave, "Redshift would lack real-time capabilities," is not entirely true: Redshift can do real-time ingestion. Evidence: https://aws.amazon.com/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
upvoted 1 times
jennyka76 4 months, 1 week ago
ANSWER - A
https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html
upvoted 1 times
cloudbusting 4 months, 1 week ago
near-real-time data querying = Kinesis analytics
upvoted 2 times
Question #321 Topic 1
What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?
A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
Community vote distribution
D (100%)
bdp123 Highly Voted 4 months, 1 week ago
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/#:~:text=Solution%20overview
upvoted 5 times
Grace83 3 months, 1 week ago
Thank you!
upvoted 1 times
kruasan Most Recent 1 month, 4 weeks ago
To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to encrypt the object using SSE-C, SSE-S3, or SSE-KMS. The following code example shows a Put request using SSE-S3. https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
upvoted 3 times
kruasan 1 month, 4 weeks ago
The other options would not enforce encryption:
Requiring an s3:x-amz-acl header does not mandate encryption. This header controls access permissions.
Requiring an s3:x-amz-acl header set to private also does not enforce encryption. It only enforces private access permissions.
Requiring an aws:SecureTransport header ensures uploads use SSL but does not specify that objects must be encrypted. Encryption is not required when using SSL transport.
upvoted 2 times
Sbbh 3 months, 1 week ago
Confusing question. It doesn't state clearly if the object needs to be encrypted at-rest or in-transit
upvoted 2 times
Steve_4542636 3 months, 3 weeks ago
I vote d
upvoted 1 times
LuckyAro 4 months ago
To ensure that all objects uploaded to an Amazon S3 bucket are encrypted, the solutions architect should update the bucket policy to deny any PutObject requests that do not have an x-amz-server-side-encryption header set. This will prevent any objects from being uploaded to the bucket unless they are encrypted using server-side encryption.
upvoted 3 times
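The bucket policy behind option D can be sketched as follows, using the `Null` condition pattern from the AWS security blog post linked in this thread (the bucket name is a placeholder). The `Null: true` condition matches requests where the `x-amz-server-side-encryption` header is absent, so those uploads are denied:

```python
def deny_unencrypted_uploads_policy(bucket):
    """Build a bucket policy that denies PutObject requests lacking the
    x-amz-server-side-encryption header. A sketch of option D only."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyUnencryptedObjectUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "Null": {"s3:x-amz-server-side-encryption": "true"}
                },
            }
        ],
    }
```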
jennyka76 4 months, 1 week ago
answer - D
upvoted 1 times
zTopic 4 months, 1 week ago
Answer is D
upvoted 1 times
Neorem 4 months, 1 week ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon-s3-policy-keys.html
upvoted 1 times
Question #322 Topic 1
A solutions architect is designing a multi-tier application for a company. The application's users upload images from a mobile device. The application generates a thumbnail of each image and returns a message to the user to confirm that the image was uploaded successfully.
The thumbnail generation can take up to 60 seconds, but the company wants to provide a faster response time to its users to notify them that the original image was received. The solutions architect must design the application to asynchronously dispatch requests to the different application tiers.
What should the solutions architect do to meet these requirements?
A. Write a custom AWS Lambda function to generate the thumbnail and alert the user. Use the image upload process as an event source to invoke the Lambda function.
B. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the user when thumbnail generation is complete.
C. Create an Amazon Simple Queue Service (Amazon SQS) message queue. As images are uploaded, place a message on the SQS queue for thumbnail generation. Alert the user through an application message that the image was received.
D. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application to generate the thumbnail after the image upload is complete. Use a second subscription to message the user's mobile app by way of a push notification after thumbnail generation is complete.
Community vote distribution
C (88%) 13%
Steve_4542636 Highly Voted 3 months, 3 weeks ago
I've noticed there are a lot of questions about decoupling services and SQS is almost always the answer.
upvoted 13 times
Neha999 Highly Voted 4 months, 1 week ago
D
SNS fan out
upvoted 7 times
Zox42 Most Recent 3 months ago
Answers B and D alert the user when thumbnail generation is complete. Answer C alerts the user through an application message that the image was received.
upvoted 3 times
Sbbh 3 months, 1 week ago
B:
Use cases for Step Functions vary widely, from orchestrating serverless microservices, to building data-processing pipelines, to defining a security-incident response. As mentioned above, Step Functions may be used for synchronous and asynchronous business processes.
upvoted 1 times
AlessandraSAA 3 months, 3 weeks ago
why not B?
upvoted 3 times
Wael216 3 months, 3 weeks ago
Creating an Amazon Simple Queue Service (SQS) message queue and placing messages on the queue for thumbnail generation can help separate the image upload and thumbnail generation processes.
upvoted 1 times
vindahake 3 months, 4 weeks ago
C
The key here is "a faster response time to its users to notify them that the original image was received", i.e. the user needs to be notified when the image is received, not after the thumbnail is created.
upvoted 2 times
AlmeroSenior 4 months ago
A looks like the best way, but it's essentially replacing the mentioned app, and that's not the ask.
upvoted 1 times
Mickey321 4 months ago
Selected Answer: A https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html
upvoted 1 times
bdp123 4 months ago
C is the only one that makes sense
upvoted 1 times
LuckyAro 4 months ago
Use a custom AWS Lambda function to generate the thumbnail and alert the user. Lambda functions are well-suited for short-lived, stateless operations like generating thumbnails, and they can be triggered by various events, including image uploads. By using Lambda, the application can quickly confirm that the image was uploaded successfully and then asynchronously generate the thumbnail. When the thumbnail is generated, the Lambda function can send a message to the user to confirm that the thumbnail is ready.
C proposes to use an Amazon Simple Queue Service (Amazon SQS) message queue to process image uploads and generate thumbnails. SQS can help decouple the image upload process from the thumbnail generation process, which is helpful for asynchronous processing. However, it may not be the most suitable option for quickly alerting the user that the image was received, as the user may have to wait until the thumbnail is generated before receiving a notification.
upvoted 2 times
Bhrino 4 months, 1 week ago
This is A because SNS and SQS don't work since it can take up to 60 seconds, and B is just more complex than A.
upvoted 1 times
CapJackSparrow 3 months, 2 weeks ago
Does Lambda not time out after 15 seconds?
upvoted 1 times
MssP 3 months ago
15 min.
upvoted 1 times
jennyka76 4 months, 1 week ago
answer - c
upvoted 1 times
rrharris 4 months, 1 week ago
Answer is C
upvoted 1 times
zTopic 4 months, 1 week ago
The solutions architect can use Amazon Simple Queue Service (SQS) to manage the messages and dispatch the requests in a scalable and decoupled manner. Therefore, the correct answer is C.
upvoted 2 times
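Option C's decoupling can be shown in miniature: acknowledge the upload immediately and defer thumbnail generation by enqueuing a message. Here a plain list stands in for the SQS queue, and the message fields are assumptions:

```python
import json

def handle_upload(image_id, queue):
    """Acknowledge an image upload right away and enqueue the slow
    thumbnail work for a separate tier to process asynchronously."""
    queue.append(json.dumps({"task": "generate_thumbnail", "image_id": image_id}))
    # The user gets this confirmation without waiting ~60s for the thumbnail.
    return {"status": "received", "image_id": image_id}
```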
Question #323 Topic 1
A company’s facility has badge readers at every entrance throughout the building. When badges are scanned, the readers send a message over HTTPS to indicate who attempted to access that particular entrance.
A solutions architect must design a system to process these messages from the sensors. The solution must be highly available, and the results must be made available for the company’s security team to analyze.
Which system architecture should the solutions architect recommend?
A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket.
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table.
C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the results to an Amazon DynamoDB table.
D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written directly to an S3 bucket by way of the VPC endpoint.
Community vote distribution
B (100%)
kruasan Highly Voted 1 month, 4 weeks ago
Option A would not provide high availability. A single EC2 instance is a single point of failure.
Option B provides a scalable, highly available solution using serverless services. API Gateway and Lambda can scale automatically, and DynamoDB provides a durable data store.
Option C would expose the Lambda function directly to the public Internet, which is not a recommended architecture. API Gateway provides an abstraction layer and additional features like access control.
Option D requires configuring a VPN to AWS which adds complexity. It also saves the raw sensor data to S3, rather than processing it and storing the results.
upvoted 5 times
Steve_4542636 Most Recent 3 months, 3 weeks ago
I vote B
upvoted 1 times
KZM 4 months ago
It is option "B"
Option "B" can provide a system with highly scalable, fault-tolerant, and easy to manage.
upvoted 1 times
LuckyAro 4 months ago
Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save the messages to an Amazon DynamoDB table. This option provides a highly available and scalable solution that can easily handle large amounts of data. It also integrates with other AWS services, making it easier to analyze and visualize the data for the security team.
upvoted 3 times
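Option B in miniature is a Lambda handler behind API Gateway that parses each badge-scan message. The field names (`badge_id`, `entrance`) are assumptions, and the DynamoDB write is left as a comment so the sketch stays self-contained:

```python
import json

def handler(event, context):
    """Parse a badge-scan POST body delivered via API Gateway's proxy
    integration and shape the item a real function would store."""
    scan = json.loads(event["body"])
    item = {"badge_id": scan["badge_id"], "entrance": scan["entrance"]}
    # table.put_item(Item=item)  # DynamoDB write omitted in this sketch
    return {"statusCode": 200, "body": json.dumps({"recorded": item})}
```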
Question #324 Topic 1
A company wants to implement a disaster recovery plan for its primary on-premises file storage volume. The file storage volume is mounted from an Internet Small Computer Systems Interface (iSCSI) device on a local storage server. The file storage volume holds hundreds of terabytes (TB) of data.
The company wants to ensure that end users retain immediate access to all file types from the on-premises systems without experiencing latency. Which solution will meet these requirements with the LEAST amount of change to the company's existing infrastructure?
A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises. Set the local cache to 10 TB. Modify existing applications to access the files through the NFS protocol. To recover from a disaster, provision an Amazon EC2 instance and mount the S3 bucket that contains the files.
B. Provision an AWS Storage Gateway tape gateway. Use a data backup solution to back up all existing data to a virtual tape library. Configure the data backup solution to run nightly after the initial backup is complete. To recover from a disaster, provision an Amazon EC2 instance and restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes in the virtual tape library.
C. Provision an AWS Storage Gateway Volume Gateway cached volume. Set the local cache to 10 TB. Mount the Volume Gateway cached volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2 instance.
D. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount of disk space as the existing file storage volume. Mount the Volume Gateway stored volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2 instance.
Community vote distribution
D (70%) C (30%)
Grace83 Highly Voted 3 months, 1 week ago
D is the correct answer
Volume Gateway CACHED Vs STORED
Cached = stores a subset of frequently accessed data locally
Stored = Retains the ENTIRE dataset ("all file types") in the on-prem data centre
upvoted 7 times
alexandercamachop Most Recent 1 month ago
Correct answer is Volume Gateway Stored which keeps all data on premises.
To have immediate access to the data. Cached is for frequently accessed data only.
upvoted 1 times
omoakin 1 month ago
CCCCCCCCCCCCCCCC
upvoted 1 times
lucdt4 1 month, 1 week ago
D is the correct answer
Volume Gateway CACHED vs STORED:
Cached = stores recently accessed data locally
Stored = retains the ENTIRE dataset ("all file types") in the on-prem data centre
upvoted 1 times
rushi0611 1 month, 3 weeks ago
In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency access.
In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up to AWS.
Reference: https://aws.amazon.com/storagegateway/faqs/ Good luck.
upvoted 1 times
It is stated that the company wants to keep the data locally and have a DR plan in the cloud. It points directly to the volume gateway
upvoted 1 times
UnluckyDucky 3 months, 1 week ago
"The company wants to ensure that end users retain immediate access to all file types from the on-premises systems "
D is the correct answer.
upvoted 2 times
CapJackSparrow 3 months, 2 weeks ago
all file types, NOT all files. Volume mode can not cache 100TBs.
upvoted 2 times
eddie5049 1 month, 3 weeks ago
https://docs.aws.amazon.com/storagegateway/latest/vgw/StorageGatewayConcepts.html
Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored volumes can support up to 32 volumes and a total volume storage of 512 TiB (0.5 PiB).
upvoted 1 times
all file types. Cached only saves the most frequently or latest accessed files. If you didn't access some file type for a long time, it won't be cached -> no immediate access
upvoted 2 times
WherecanIstart 3 months, 2 weeks ago
"The company wants to ensure that end users retain immediate access to all file types from the on-premises systems "
This points to stored volumes..
upvoted 1 times
Option D is the right choice for this question. "The company wants to ensure that end users retain immediate access to all file types from the on-premises systems"
Cached volumes: low latency access to most recent data
Stored volumes: entire dataset is on premise, scheduled backups to S3 Hence Volume Gateway stored volume is the apt choice.
upvoted 2 times
bangfire 3 months, 2 weeks ago
Answer is C.
Option D is not the best solution because a Volume Gateway stored volume does not provide immediate access to all file types and would require additional steps to retrieve data from Amazon S3, which can result in latency for end-users.
upvoted 2 times
UnluckyDucky 3 months, 2 weeks ago
You're confusing cached mode with stored volume mode.
upvoted 1 times
Answer is C. why?
https://docs.aws.amazon.com/storagegateway/latest/vgw/StorageGatewayConcepts.html#storage-gateway-stored-volume-concepts
"Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored volumes can support up to 32 volumes and a total volume storage of 512 TiB"
Option D states: "Provision an AWS Storage Gateway Volume Gateway stored *volume* with the same amount of disk space as the existing file storage volume.".
Notice that it states volume and not volumes, which would be the only way to match the information that the question provides. The initial question states that the on-premises volume is 100s of TB in size.
Therefore, only logical and viable answer can be C.
Feel free to prove me wrong
upvoted 3 times
eddie5049 1 month, 3 weeks ago
Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored volumes can support up to 32 volumes and a total volume storage of 512 TiB (0.5 PiB).
why not configure multiple gateways to achieve the hundreds of TB?
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
Stored Volume Gateway will retain ALL data locally whereas Cached Volume Gateway retains frequently accessed data locally
upvoted 3 times
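The capacity debate in this thread can be checked against the limits quoted from the Storage Gateway docs above (stored volumes up to 16 TiB each, up to 32 volumes and 512 TiB per gateway). A quick sketch of the arithmetic:

```python
# Stored-volume limits as quoted from the AWS docs elsewhere in this thread:
# each volume up to 16 TiB, up to 32 volumes per gateway (512 TiB per gateway).
MAX_VOLUME_TIB = 16
MAX_VOLUMES_PER_GATEWAY = 32
PER_GATEWAY_TIB = MAX_VOLUME_TIB * MAX_VOLUMES_PER_GATEWAY  # 512 TiB

def gateways_needed(dataset_tib):
    """Number of stored-volume gateways needed for a dataset of this size."""
    return -(-dataset_tib // PER_GATEWAY_TIB)  # ceiling division

# "Hundreds of terabytes": a single 16 TiB volume is far too small, but one
# gateway's 32 volumes (512 TiB total) could hold e.g. a 300 TiB dataset.
fits_single_volume = 300 <= MAX_VOLUME_TIB          # False
fits_single_gateway = gateways_needed(300) == 1     # True
```

So the "single volume" wording in option D is the weak point the C voters seize on, while the per-gateway totals are what the D voters lean on.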
KZM 4 months ago
As per the given information, option 'C' can support the Company's requirements with the LEAST amount of change to the existing infrastructure, I think.
https://aws.amazon.com/storagegateway/volume/
upvoted 2 times
bdp123 4 months ago
the " all file types" is confusing - does not say "all files" - also, hundreds of Terabytes is enormously large to maintain all files on-prem. Cache volume is also low latency
upvoted 2 times
rrharris 4 months, 1 week ago
Answer is D - Retain Immediate Access
upvoted 3 times
Question #325 Topic 1
A company is hosting a web application from an Amazon S3 bucket. The application uses Amazon Cognito as an identity provider to authenticate users and return a JSON Web Token (JWT) that provides access to protected resources that are stored in another S3 bucket.
Upon deployment of the application, users report errors and are unable to access the protected content. A solutions architect must resolve this issue by providing proper permissions so that users can access the protected content.
Which solution meets these requirements?
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
B. Update the S3 ACL to allow the application to access the protected content.
C. Redeploy the application to Amazon S3 to prevent eventually consistent reads in the S3 bucket from affecting the ability of users to access the protected content.
D. Update the Amazon Cognito pool to use custom attribute mappings within the identity pool and grant users the proper permissions to access the protected content.
Community vote distribution
A (82%) D (18%)
Abrar2022 2 weeks, 3 days ago
Services access other services via IAM Roles. Hence why updating AWS Cognito identity pool to assume proper IAM Role is the right solution.
upvoted 1 times
alexandercamachop 1 month ago
To resolve the issue and provide proper permissions for users to access the protected content, the recommended solution is:
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content. Explanation:
Amazon Cognito provides authentication and user management services for web and mobile applications.
In this scenario, the application is using Amazon Cognito as an identity provider to authenticate users and obtain JSON Web Tokens (JWTs). The JWTs are used to access protected resources stored in another S3 bucket.
To grant users access to the protected content, the proper IAM role needs to be assumed by the identity pool in Amazon Cognito.
By updating the Amazon Cognito identity pool with the appropriate IAM role, users will be authorized to access the protected content in the S3 bucket.
upvoted 1 times
alexandercamachop 1 month ago
Option B is incorrect because updating the S3 ACL (Access Control List) will only affect the permissions of the application, not the users accessing the content.
Option C is incorrect because redeploying the application to Amazon S3 will not resolve the issue related to user access permissions.
Option D is incorrect because updating custom attribute mappings in Amazon Cognito will not directly grant users the proper permissions to access the protected content.
upvoted 1 times
shanwford 2 months, 2 weeks ago
Amazon Cognito identity pools assign your authenticated users a set of temporary, limited-privilege credentials to access your AWS resources. The permissions for each user are controlled through IAM roles that you create. https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html
upvoted 1 times
Brak 3 months, 3 weeks ago
A makes no sense - Cognito is not accessing the S3 resource. It just returns the JWT token that will be attached to the S3 request.
D is the right answer, using custom attributes that are added to the JWT and used to grant permissions in S3. See https://docs.aws.amazon.com/cognito/latest/developerguide/using-attributes-for-access-control-policy-example.html for an example.
upvoted 2 times
Abhineet9148232 3 months, 3 weeks ago
But even D requires setting up the permissions as bucket policy (as show in the shared example) which includes higher overhead than managing permissions attached to specific roles.
upvoted 2 times
asoli 3 months, 1 week ago
A says "Identity Pool"
According to AWS: "With an identity pool, your users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB."
So, answer is A
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
Services access other services via IAM Roles.
upvoted 1 times
LuckyAro 4 months ago
A is the best solution as it directly addresses the issue of permissions and grants authenticated users the necessary IAM role to access the protected content.
A suggests updating the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content. This is a valid solution, as it would grant authenticated users the necessary permissions to access the protected content.
upvoted 3 times
jennyka76 4 months, 1 week ago
ANSWER - A
https://docs.aws.amazon.com/cognito/latest/developerguide/tutorial-create-identity-pool.html You have to create a custom role such as read-only
upvoted 4 times
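For reference on option A: a Cognito identity pool grants access by letting authenticated identities assume an IAM role. A minimal sketch of what that role's trust policy and permissions policy could look like; the identity pool ID and bucket name are placeholders, and the exact policies depend on your setup:

```python
import json

# Hypothetical identity pool ID; substitute your own.
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"

# Trust policy: lets AUTHENTICATED identities from this pool assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "cognito-identity.amazonaws.com"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {"cognito-identity.amazonaws.com:aud": IDENTITY_POOL_ID},
            "ForAnyValue:StringLike": {"cognito-identity.amazonaws.com:amr": "authenticated"},
        },
    }],
}

# Permissions policy: read access to the protected-content bucket (name hypothetical).
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::protected-content-bucket/*",
    }],
}

policy_json = json.dumps(trust_policy)
```

Attaching this role pair to the identity pool's authenticated role is the "update the identity pool to assume the proper IAM role" step that option A describes.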
Question #326 Topic 1
An image hosting company uploads its large assets to Amazon S3 Standard buckets. The company uses multipart upload in parallel by using S3 APIs and overwrites if the same object is uploaded again. For the first 30 days after upload, the objects will be accessed frequently. The objects will be used less frequently after 30 days, but the access patterns for each object will be inconsistent. The company must optimize its S3 storage costs while maintaining high availability and resiliency of stored assets.
Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)
A. Move assets to S3 Intelligent-Tiering after 30 days.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads.
C. Configure an S3 Lifecycle policy to clean up expired object delete markers.
D. Move assets to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
E. Move assets to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
Community vote distribution
AB (58%) BD (35%) 7%
Neha999 Highly Voted 4 months, 1 week ago
AB
A : Access Pattern for each object inconsistent, Infrequent Access
B : Deleting Incomplete Multipart Uploads to Lower Amazon S3 Costs
upvoted 14 times
TungPham Highly Voted 4 months, 1 week ago
B because Abort Incomplete Multipart Uploads Using S3 Lifecycle => https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/
A because The objects will be used less frequently after 30 days, but the access patterns for each object will be inconsistent => random access =>
S3 Intelligent-Tiering
upvoted 8 times
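The winning combination A + B can be expressed as a single S3 Lifecycle configuration. A sketch in the shape that boto3's put_bucket_lifecycle_configuration accepts; the rule IDs and the 7-day abort window are illustrative choices, not from the question:

```python
# S3 Lifecycle configuration covering both recommended actions:
# (A) transition to Intelligent-Tiering after 30 days, and
# (B) abort incomplete multipart uploads so their parts stop accruing cost.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
        },
        {
            "ID": "abort-incomplete-multipart",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
    ]
}

# To apply it (requires AWS credentials; bucket name is hypothetical):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="image-assets-bucket", LifecycleConfiguration=lifecycle)
```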
MrAWSAssociate Most Recent 1 week ago
Option A has not been mentioned for resiliency in S3, check the page: https://docs.aws.amazon.com/AmazonS3/latest/userguide/disaster-recovery-resiliency.html
Therefore, I am with B & D choices.
upvoted 1 times
alexandercamachop 1 month ago
A. Move assets to S3 Intelligent-Tiering after 30 days.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads. Explanation:
A. Moving assets to S3 Intelligent-Tiering after 30 days: This storage class automatically analyzes the access patterns of objects and moves them between frequent access and infrequent access tiers. Since the objects will be accessed frequently for the first 30 days, storing them in the frequent access tier during that period optimizes performance. After 30 days, when the access patterns become inconsistent, S3 Intelligent-Tiering will automatically move the objects to the infrequent access tier, reducing storage costs.
B. Configuring an S3 Lifecycle policy to clean up incomplete multipart uploads: Multipart uploads are used for large objects, and incomplete multipart uploads can consume storage space if not cleaned up. By configuring an S3 Lifecycle policy to clean up incomplete multipart uploads, unnecessary storage costs can be avoided.
upvoted 1 times
antropaws 1 month ago
AD.
B makes no sense because multipart uploads overwrite objects that are already uploaded. The question never says this is a problem.
upvoted 1 times
klayytech 3 months ago
the following two actions to optimize S3 storage costs while maintaining high availability and resiliency of stored assets:
A. Move assets to S3 Intelligent-Tiering after 30 days. This will automatically move objects between two access tiers based on changing access patterns and save costs by reducing the number of objects stored in the expensive tier.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads. This will help to reduce storage costs by removing incomplete multipart uploads that are no longer needed.
upvoted 2 times
B = Deleting incomplete uploads will lower S3 cost.
and D: as "For the first 30 days after upload, the objects will be accessed frequently"
Intelligent-Tiering checks whether a file hasn't been accessed for 30 consecutive days and then moves it to Infrequent Access. So if somebody accessed the file 20 days after the upload, with the intelligent process the file will be moved to the Infrequent Access tier after 50 days, which will reflect in the COST.
"S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier and after 90 days of no access to the Archive Instant Access tier. For data that does not require immediate retrieval, you can set up S3 Intelligent-Tiering to monitor and automatically move objects that aren’t accessed for 180 days or more to the Deep Archive Access tier to realize up to 95% in storage cost savings."
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
upvoted 2 times
Apologies D is wrong for sure lol
"S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed." and for the first 30 days data is frequently accessed lol.
So best solution will be A - Amazon S3 Intelligent-Tiering
upvoted 2 times
sorry, ignore the above comment, as we are picking the solution that is needed after 30 days
this should be: Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
upvoted 2 times
Infrequent access is written in the question so it's BD
upvoted 1 times
It is not infrequent... it is LESS frequent. It can be a little less or much less (infrequent), but it is clear that the pattern is inconsistent -> A
upvoted 1 times
The answer is AB
A: "the access patterns for each object will be inconsistent" so Intelligent-Tiering works well for this assumption (even better than D. It may put it in lower tiers based on access patterns that Standard-IA)
D: incomplete multipart is just a waste of resources
upvoted 2 times
I meant B: incomplete multipart is just a waste of resources
upvoted 1 times
AlessandraSAA 3 months, 2 weeks ago
upvoted 3 times
AB, Unknown or changing access pattern https://aws.amazon.com/s3/storage-classes/
upvoted 1 times
I think B is obvious, and I chose A because the pattern is unpredictable
upvoted 2 times
Maximus007 3 months, 2 weeks ago
B is clear
the choice might be between A and D
I vote for A - S3 Intelligent-Tiering will analyze patterns and decide properly
upvoted 1 times
taehyeki 3 months, 3 weeks ago
I think B, D make more sense.
It doesn't matter where each object is moved;
we only know the object is not accessed frequently after 30 days, so I go with D.
upvoted 2 times
Abhineet9148232 3 months, 3 weeks ago
S3-IA provides same low latency and high throughput performance of S3 Standard. Ideal for infrequent but high throughput access.
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
For A vs D, the key phrase is "but the access patterns for each object will be inconsistent." That means some objects will be accessed, others will not. This gives the Intelligent tier the opportunity to move the S3 object to Glacier Instant Retrieval, which still has very low latency. This is a confusing question though, since Intelligent-Tiering does add additional costs per object.
upvoted 2 times
HaineHess 3 months, 4 weeks ago
b d for cost saving & high availability
upvoted 1 times
Question #327 Topic 1
A solutions architect must secure a VPC network that hosts Amazon EC2 instances. The EC2 instances contain highly sensitive data and run in a private subnet. According to company policy, the EC2 instances that run in the VPC can access only approved third-party software repositories on the internet for software product updates that use the third party’s URL. Other internet traffic must be blocked.
Which solution meets these requirements?
A. Update the route table for the private subnet to route the outbound traffic to an AWS Network Firewall firewall. Configure domain list rule groups.
B. Set up an AWS WAF web ACL. Create a custom set of rules that filter traffic requests based on source and destination IP address range sets.
C. Implement strict inbound security group rules. Configure an outbound rule that allows traffic only to the authorized software repositories on the internet by specifying the URLs.
D. Configure an Application Load Balancer (ALB) in front of the EC2 instances. Direct all outbound traffic to the ALB. Use a URL-based rule listener in the ALB’s target group for outbound access to the internet.
Community vote distribution
A (88%) 12%
Bhawesh Highly Voted 4 months ago
Correct Answer A. Send the outbound connection from EC2 to Network Firewall. In Network Firewall, create stateful outbound rules to allow certain domains for software patch download and deny all other domains.
https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html#suricata-example-domain-filtering
upvoted 9 times
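For option A, AWS Network Firewall's domain list rule groups provide the domain-level filtering that security groups cannot. A sketch of the parameters for such a rule group, in the shape of the create_rule_group API; the domain target, name, and capacity are hypothetical:

```python
# Stateful domain-list rule group: allow only the approved vendor repository
# and deny all other outbound domains (names and capacity are hypothetical).
rule_group = {
    "RuleGroupName": "allow-approved-repos",
    "Type": "STATEFUL",
    "Capacity": 100,
    "RuleGroup": {
        "RulesSource": {
            "RulesSourceList": {
                # A leading dot matches the domain and its subdomains.
                "Targets": [".vendor-repo.example.com"],
                # Match on plaintext HTTP Host headers and TLS SNI.
                "TargetTypes": ["HTTP_HOST", "TLS_SNI"],
                # ALLOWLIST permits listed domains and denies everything else.
                "GeneratedRulesType": "ALLOWLIST",
            }
        }
    },
}

# To create it (requires AWS credentials):
# import boto3
# boto3.client("network-firewall").create_rule_group(**rule_group)
```

The private subnet's default route then points at the firewall endpoint, as option A describes.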
jennyka76 Highly Voted 4 months, 1 week ago
Answer - A
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-al1-al2-update-yum-without-internet/
upvoted 5 times
asoli 3 months, 1 week ago
Although the answer is A, the link you provided here is not related to this question. The information about "Network Firewall" and how it can help this issue is here:
https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html#suricata-example-domain-filtering
(thanks to "@Bhawesh" to provide the link in their answer)
upvoted 3 times
kelvintoys93 Most Recent 1 week, 5 days ago
Isn't a private subnet unable to connect to the internet at all, unless it has a NAT gateway?
upvoted 1 times
UnluckyDucky 3 months, 2 weeks ago
Can't use URLs in outbound rule of security groups. URL Filtering screams Firewall.
upvoted 4 times
VeseljkoD 3 months, 3 weeks ago
We can't specify a URL in an outbound rule of a security group. Create a free tier AWS account and test it.
upvoted 2 times
Leo301 3 months, 3 weeks ago
CCCCCCCCCCC
upvoted 1 times
Brak 3 months, 3 weeks ago
It can't be C. You cannot use URLs in the outbound rules of a security group.
upvoted 3 times
johnmcclane78 3 months, 3 weeks ago
Option C is the best solution to meet the requirements of this scenario. Implementing strict inbound security group rules that only allow traffic from approved sources can help secure the VPC network that hosts Amazon EC2 instances. Additionally, configuring an outbound rule that allows traffic only to the authorized software repositories on the internet by specifying the URLs will ensure that only approved third-party software repositories can be accessed from the EC2 instances. This solution does not require any additional AWS services and can be implemented using VPC security groups.
Option A is not the best solution as it involves the use of AWS Network Firewall, which may introduce additional operational overhead. While domain list rule groups can be used to block all internet traffic except for the approved third-party software repositories, this solution is more complex than necessary for this scenario.
upvoted 2 times
Steve_4542636 3 months, 3 weeks ago
In the security group, only allow inbound traffic originating from the VPC. Then only allow outbound traffic with a whitelisted IP address. The question asks about blocking EC2 instances, which is best for security groups since those are at the EC2 instance level. A network firewall is at the VPC level, which is not what the question is asking to protect.
upvoted 1 times
Theodorz 3 months, 3 weeks ago
Is Security Group able to allow a specific URL? According to https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html, I cannot find such description.
upvoted 2 times
KZM 4 months ago
I am confused: it seems both options A and C are valid solutions.
upvoted 3 times
ruqui 1 month ago
C is not valid. Security groups can allow only traffic from specific ports and/or IPs, you can't use an URL. Correct answer is A
upvoted 1 times
Zohx 3 months, 4 weeks ago
Same here - why is C not a valid option?
upvoted 2 times
Karlos99 3 months, 3 weeks ago
And it is easier to do it at the VPC level
upvoted 1 times
Karlos99 3 months, 3 weeks ago
Because in this case, the session is initialized from inside
upvoted 1 times
Neha999 4 months, 1 week ago
A as other options are controlling inbound traffic
upvoted 4 times
Question #328 Topic 1
A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load Balancer (ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests asynchronously.
The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products. What should a solutions architect recommend to ensure that all the requests are processed successfully?
A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.
B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic.
C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle.
D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances.
Community vote distribution
D (58%) B (42%)
Steve_4542636 Highly Voted 3 months, 3 weeks ago
The auto-scaling would increase the rate at which sales requests are "processed", whereas a SQS will ensure messages don't get lost. If you were at a fast food restaurant with a long line with 3 cash registers, would you want more cash registers or longer ropes to handle longer lines? Same concept here.
upvoted 13 times
joechen2023 1 week, 5 days ago
As an architect, you can't just add more backend workers (that is HR's and the boss's job, not part of the architect's solution design). So when demand surges, the only correct choice is to buffer requests in SQS so that workers can take their time to process them successfully
upvoted 1 times
rushi0611 1 month, 3 weeks ago
"ensure that all the requests are processed successfully?"
we want to ensure success not the speed, even in the auto-scaling, there is the chance for the failure of the request but not in SQS- if it is failed in sqs it is sent back to the queue again and new consumer will pick the request.
upvoted 3 times
lizzard812 3 months ago
Hell true: I'd rather combine both options: SQS + auto scaling bound to the length of the queue.
upvoted 5 times
Abhineet9148232 1 month, 3 weeks ago
B doesn't fit because Auto Scaling alone does not guarantee that all requests will be processed successfully, which the question clearly asks for.
D ensures that all messages are processed.
upvoted 3 times
kruasan 1 month, 4 weeks ago
An SQS queue acts as a buffer between the frontend (website) and backend (API). Web requests can dump messages into the queue at a high throughput, then the queue handles delivering those messages to the API at a controlled rate that it can sustain. This prevents the API from being overwhelmed.
upvoted 1 times
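The buffering idea described above can be illustrated locally with Python's queue module standing in for SQS: a sudden burst of requests is absorbed by the queue and drained by a worker at its own pace, with nothing dropped. This is only a local analogy, not actual SQS code:

```python
import queue
import threading

# Local stand-in for the SQS buffering pattern: the producer enqueues a burst
# faster than the worker drains it, but every message is still processed.
q = queue.Queue()
processed = []

def worker():
    while True:
        msg = q.get()
        if msg is None:          # sentinel: no more messages
            break
        processed.append(msg)    # stand-in for "process sales request"
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(100):             # sudden burst of 100 sales requests
    q.put(f"order-{i}")
q.put(None)                      # tell the worker to stop after draining
t.join()
```

With real SQS, messages that fail processing also return to the queue after the visibility timeout, which is the "ensure all requests are processed" property the D voters point to.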
kruasan 1 month, 4 weeks ago
Options A and B would help by scaling out more instances, however, this may not scale quickly enough and still risks overwhelming the API. Caching parts of the dynamic content (option C) may help but does not provide the buffering mechanism that a queue does.
upvoted 1 times
D makes sense
upvoted 1 times
kraken21 2 months, 4 weeks ago
D makes more sense
upvoted 1 times
kraken21 2 months, 4 weeks ago
There is no clarity on what the asynchronous process is, but D makes more sense if we want to process all requests successfully. The way the question is worded, it looks like msgs -> SQS -> ELB/EC2. This ensures that the messages are processed but may be delayed as the load increases.
upvoted 1 times
Although I agree with B for better performance, I choose 'D' as the question asks to ensure that all the requests are processed successfully.
upvoted 2 times
klayytech 2 months, 4 weeks ago
To ensure that all the requests are processed successfully, I would recommend adding an Amazon CloudFront distribution for the static content and an Amazon CloudFront distribution for the dynamic content. This will help to reduce the load on the API and improve its performance. You can also place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic. This will help to ensure that you have enough capacity to handle the increase in traffic during events for the launch of new products.
upvoted 1 times
The company is expecting a significant and sudden increase in the number of sales requests and keyword async. So I feel option D suits here.
upvoted 1 times
Critical here is "to ensure that all the requests". ALL REQUESTS, so it is only possible with SQS. An ASG can take time to launch new instances, so requests can be lost.
upvoted 3 times
I vote for D. "The company is expecting a significant and sudden increase in the number of sales requests". A sudden increase means the ASG might not be able to deploy more EC2 instances when requests rocket, and some of the requests will get lost.
upvoted 2 times
The keyword here about the orders is "asynchronously". Orders are supposed to be processed asynchronously, so they can be published to an SQS queue and processed after that. It also ensures that in a spike there is no lost order.
In contrast, if you think the answer is B, the issue is the sudden spike. Maybe the auto scaling is not acting fast enough and some orders are lost. So, B is not correct.
upvoted 2 times
harirkmusa 3 months, 3 weeks ago
Selected D
upvoted 1 times
taehyeki 3 months, 3 weeks ago
answer D
upvoted 1 times
I think D.
It may be SQS as per these points:
> workers process sales requests asynchronously, and
> the requests are processed successfully
upvoted 3 times
Based on the provided information, the best option is B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic.
This option addresses the need for scaling the infrastructure to handle the increase in traffic by adding an Auto Scaling group to the existing EC2 instances, which allows for automatic scaling based on network traffic. Additionally, adding an Amazon CloudFront distribution for the static content will improve the performance of the website by caching content closer to the end-users.
upvoted 3 times
LuckyAro 4 months ago
D may be inappropriate for this scenario: adding an Amazon CloudFront distribution for the static content and an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances adds unnecessary complexity to the system. It would be better to add an Auto Scaling group to handle the increased traffic.
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
SQS also doesn't ensure real-time processing since the EC2s would be the bottleneck.
upvoted 1 times
MssP 3 months ago
Where do you see real-time processing?? Here the question is to ensure that ALL requests are processed, not real-time.
upvoted 1 times
nder 4 months ago
No, because you must ensure the requests are processed successfully. If there is a sudden spike in usage some messages might be missed whereas with SQS the messages must be processed before being removed from the queue. Answer D is correct
upvoted 1 times
Question #329 Topic 1
A security audit reveals that Amazon EC2 instances are not being patched regularly. A solutions architect needs to provide a solution that will run regular security scans across a large fleet of EC2 instances. The solution should also patch the EC2 instances on a regular schedule and provide a report of each instance’s patch status.
Which solution will meet these requirements?
A. Set up Amazon Macie to scan the EC2 instances for software vulnerabilities. Set up a cron job on each EC2 instance to patch the instance on a regular schedule.
B. Turn on Amazon GuardDuty in the account. Configure GuardDuty to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Session Manager to patch the EC2 instances on a regular schedule.
C. Set up Amazon Detective to scan the EC2 instances for software vulnerabilities. Set up an Amazon EventBridge scheduled rule to patch the EC2 instances on a regular schedule.
D. Turn on Amazon Inspector in the account. Configure Amazon Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Patch Manager to patch the EC2 instances on a regular schedule.
Community vote distribution
D (100%)
elearningtakai 3 months ago
Amazon Inspector is a security assessment service that automatically assesses applications for vulnerabilities or deviations from best practices. It can be used to scan the EC2 instances for software vulnerabilities. AWS Systems Manager Patch Manager can be used to patch the EC2 instances on a regular schedule. Together, these services can provide a solution that meets the requirements of running regular security scans and patching EC2 instances on a regular schedule. Additionally, Patch Manager can provide a report of each instance’s patch status.
upvoted 1 times
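As a rough sketch of what option D looks like in practice (names, schedule, and durations here are placeholders; a real setup would also register patch targets and a patch task inside the maintenance window):

```shell
# Hedged sketch of option D; names and schedule are placeholders.
# 1) Turn on Amazon Inspector vulnerability scanning for EC2 in the account.
aws inspector2 enable --resource-types EC2

# 2) Create a weekly maintenance window for Systems Manager Patch Manager
#    (Sundays 02:00 UTC, 3-hour window, stop starting new tasks 1 hour
#    before the window closes).
aws ssm create-maintenance-window \
  --name weekly-patching \
  --schedule "cron(0 2 ? * SUN *)" \
  --duration 3 \
  --cutoff 1 \
  --allow-unassociated-targets
```

Patch compliance reports (each instance's patch status) then come from Patch Manager itself, which is the reporting requirement in the question.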
Steve_4542636 3 months, 3 weeks ago
Inspector is for EC2 instances and network accessibility of those instances: https://portal.tutorialsdojo.com/forums/discussion/difference-between-security-hub-detective-and-inspector/
upvoted 1 times
LuckyAro 4 months ago
Amazon Inspector is a security assessment service that helps improve the security and compliance of applications deployed on Amazon Web Services (AWS). It automatically assesses applications for vulnerabilities or deviations from best practices. Amazon Inspector can be used to identify security issues and recommend fixes for them. It is an ideal solution for running regular security scans across a large fleet of EC2 instances.
AWS Systems Manager Patch Manager is a service that helps you automate the process of patching Windows and Linux instances. It provides a simple, automated way to patch your instances with the latest security patches and updates. Patch Manager helps you maintain compliance with security policies and regulations by providing detailed reports on the patch status of your instances.
upvoted 1 times
TungPham 4 months, 1 week ago
Amazon Inspector for EC2 https://aws.amazon.com/vi/inspector/faqs/?nc1=f_ls
Amazon system manager Patch manager for automates the process of patching managed nodes with both security-related updates and other types of updates.
http://webcache.googleusercontent.com/search?q=cache:FbFTc6XKycwJ:https://medium.com/aws-architech/use-case-aws-inspector-vs-guardduty-3662bf80767a&hl=vi&gl=kr&strip=1&vwsrc=0
upvoted 2 times
jennyka76 4 months, 1 week ago
answer - D https://aws.amazon.com/inspector/faqs/
upvoted 1 times
Neha999 4 months, 1 week ago
D as AWS Systems Manager Patch Manager can patch the EC2 instances.
upvoted 1 times
Question #330 Topic 1
A company is planning to store data on Amazon RDS DB instances. The company must encrypt the data at rest. What should a solutions architect do to meet this requirement?
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.
B. Create an encryption key. Store the key in AWS Secrets Manager. Use the key to encrypt the DB instances.
C. Generate a certificate in AWS Certificate Manager (ACM). Enable SSL/TLS on the DB instances by using the certificate.
D. Generate a certificate in AWS Identity and Access Management (IAM). Enable SSL/TLS on the DB instances by using the certificate.
Community vote distribution
A (100%)
antropaws 1 month ago
OK, but why not B???
upvoted 1 times
SkyZeroZx 2 months ago
ANSWER - A
upvoted 1 times
PRASAD180 3 months, 3 weeks ago
A is 100% correct
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
Key Management Service. Secrets Manager is for database connection strings.
upvoted 3 times
LuckyAro 4 months ago
A is the correct solution to meet the requirement of encrypting the data at rest.
To encrypt data at rest in Amazon RDS, you can use the encryption feature of Amazon RDS, which uses AWS Key Management Service (AWS KMS). With this feature, Amazon RDS encrypts each database instance with a unique key. This key is stored securely by AWS KMS. You can manage your own keys or use the default AWS-managed keys. When you enable encryption for a DB instance, Amazon RDS encrypts the underlying storage, including the automated backups, read replicas, and snapshots.
upvoted 2 times
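A minimal CLI sketch of option A (all identifiers are placeholders; note that storage encryption must be selected when the DB instance is created, it cannot be enabled later on an unencrypted instance):

```shell
# Hedged sketch of option A; identifiers are placeholders.
# 1) Create a customer managed KMS key for encryption at rest.
KEY_ID=$(aws kms create-key \
  --description "rds-at-rest" \
  --query KeyMetadata.KeyId --output text)

# 2) Create the DB instance with encryption enabled under that key.
aws rds create-db-instance \
  --db-instance-identifier mydb \
  --db-instance-class db.t3.medium \
  --engine mysql \
  --master-username admin \
  --manage-master-user-password \
  --allocated-storage 20 \
  --storage-encrypted \
  --kms-key-id "$KEY_ID"
```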
bdp123 4 months ago
AWS Key Management Service (KMS) is used to manage the keys used to encrypt and decrypt the data.
upvoted 1 times
NolaHOla 4 months, 1 week ago
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances is the correct answer to encrypt the data at rest in Amazon RDS DB instances.
Amazon RDS provides multiple options for encrypting data at rest. AWS Key Management Service (KMS) is used to manage the keys used to encrypt and decrypt the data. Therefore, a solution architect should create a key in AWS KMS and enable encryption for the DB instances to encrypt the data at rest.
upvoted 1 times
jennyka76 4 months, 1 week ago
ANSWER - A
https://docs.aws.amazon.com/whitepapers/latest/efs-encrypted-file-systems/managing-keys.html
upvoted 1 times
Bhawesh 4 months, 1 week ago
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.
upvoted 2 times
Question #331 Topic 1
A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company’s network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization.
What should a solutions architect do to meet these requirements?
A. Use AWS Snowball.
B. Use AWS DataSync.
C. Use a secure VPN connection.
D. Use Amazon S3 Transfer Acceleration.
Community vote distribution
A (85%) B (15%)
kruasan 1 month, 4 weeks ago
Don't mix up Mbps and MB/s. The proper calculation is:
15 Mbps x 0.7 = 10.5 Mb/s; 10.5 / 8 = 1.3125 MB/s; 1.3125 MB/s x 86,400 seconds per day x 30 days = 3,402,000 MB, or approximately 3.4 TB
upvoted 4 times
UnluckyDucky 3 months, 1 week ago
10 MB/s x 86,400 seconds per day x 30 days = 25,920,000 MB or approximately 25.9 TB
That's how much you can transfer with a 10 Mbps link (roughly 70% of the 15 Mbps connection). With a consistent connection of ~8 Mbps, and 30 days, you can upload 20 TB of data.
My math says B, my brain wants to go with A. Take your pick.
upvoted 2 times
Zox42 3 months ago
15 Mbps * 0.7 = 10.5 Mb/s = 1.3125 MB/s, and 1.3125 * 86,400 * 30 = 3,402,000 MB
Answer A is correct.
upvoted 2 times
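The thread's figures are easy to sanity-check without any AWS access; a quick awk one-off:

```shell
# Back-of-the-envelope check of the thread's math: a 15 Mbps link capped
# at 70% utilization, running flat out for 30 days.
awk 'BEGIN {
  mbps  = 15 * 0.7           # usable megabits per second (10.5)
  mbsec = mbps / 8           # megabytes per second (1.3125)
  total = mbsec * 86400 * 30 # seconds per day * days
  printf "%d MB = %.2f TB\n", total, total / 1000000
}'
# prints: 3402000 MB = 3.40 TB, far short of 20 TB, hence Snowball (A)
```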
PRASAD180 4 months ago
A is 100% correct
upvoted 1 times
jennyka76 4 months, 1 week ago
ANSWER - A
https://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html
upvoted 1 times
Question #332 Topic 1
A company needs to provide its employees with secure access to confidential and sensitive files. The company wants to ensure that the files can be accessed only by authorized users. The files must be downloaded securely to the employees’ devices.
The files are stored in an on-premises Windows file server. However, due to an increase in remote usage, the file server is running out of capacity.
Which solution will meet these requirements?
A. Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees’ IP addresses.
B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.
C. Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download.
D. Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS IAM Identity Center (AWS Single Sign-On).
Community vote distribution
B (100%)
SkyZeroZx 1 month, 3 weeks ago
B is the correct answer
upvoted 1 times
elearningtakai 3 months ago
This solution addresses the need for secure access to confidential and sensitive files, as well as the increase in remote usage. Migrating the files to Amazon FSx for Windows File Server provides a scalable, fully managed file storage solution in the AWS Cloud that is accessible from on-premises and cloud environments. Integration with the on-premises Active Directory allows for a consistent user experience and centralized access control. AWS Client VPN provides a secure and managed VPN solution that can be used by employees to access the files securely.
upvoted 3 times
LuckyAro 4 months ago
B is the best solution for the given requirements. It provides a secure way for employees to access confidential and sensitive files from anywhere using AWS Client VPN. The Amazon FSx for Windows File Server file system is designed to provide native support for Windows file system features such as NTFS permissions, Active Directory integration, and Distributed File System (DFS). This means that the company can continue to use their on-premises Active Directory to manage user access to files.
upvoted 1 times
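A sketch of the FSx half of option B (every identifier here is a placeholder; this variant assumes an AWS Managed Microsoft AD directory that is joined or trusted to the on-premises Active Directory, and the storage and throughput sizes are arbitrary):

```shell
# Hedged sketch of option B; identifiers and sizes are placeholders.
# Creates a Windows file system backed by a directory that can trust
# the on-premises Active Directory, so NTFS permissions carry over.
aws fsx create-file-system \
  --file-system-type WINDOWS \
  --storage-capacity 1024 \
  --subnet-ids subnet-0123456789abcdef0 \
  --windows-configuration \
      ActiveDirectoryId=d-1234567890,ThroughputCapacity=32
```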
Bilalazure 4 months ago
B is the correct answer
upvoted 1 times
jennyka76 4 months, 1 week ago
Answer - B
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/managing-storage-capacity.html
upvoted 1 times
Neha999 4 months, 1 week ago
B
Amazon FSx for Windows File Server file system
upvoted 2 times
Question #333 Topic 1
A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the
month-end financial calculation batch runs. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
A. Configure an Amazon CloudFront distribution in front of the ALB.
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.
Community vote distribution
C (100%)
elearningtakai 3 months ago
By configuring a scheduled scaling policy, the EC2 Auto Scaling group can proactively launch additional EC2 instances before the CPU utilization peaks to 100%. This will ensure that the application can handle the workload during the month-end financial calculation batch, and avoid any disruption or downtime.
Configuring a simple scaling policy based on CPU utilization or adding Amazon CloudFront distribution or Amazon ElastiCache will not directly address the issue of handling the monthly peak workload.
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
If the scaling were based on CPU or memory, it would require the metric to stay above the threshold for a certain amount of time, 5 minutes for example. That would mean the CPU would be at 100% for five minutes before scaling kicked in.
upvoted 2 times
LuckyAro 4 months ago
C: Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule is the best option because it allows for the proactive scaling of the EC2 instances before the monthly batch run begins. This will ensure that the application is able to handle the increased workload without experiencing downtime. The scheduled scaling policy can be configured to increase the number of instances in the Auto Scaling group a few hours before the batch run and then decrease the number of instances after the batch run is complete. This will ensure that the resources are available when needed and not wasted when not needed.
The most appropriate solution to handle the increased workload during the monthly batch run and avoid downtime would be to configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
upvoted 2 times
LuckyAro 4 months ago
Scheduled scaling policies allow you to schedule EC2 instance scaling events in advance based on a specified time and date. You can use this feature to plan for anticipated traffic spikes or seasonal changes in demand. By setting up scheduled scaling policies, you can ensure that you have the right number of instances running at the right time, thereby optimizing performance and reducing costs.
To set up a scheduled scaling policy in EC2 Auto Scaling, you need to specify the following:
Start time and date: The date and time when the scaling event should begin.
Desired capacity: The number of instances that you want to have running after the scaling event.
Recurrence: The frequency with which the scaling event should occur. This can be a one-time event or a recurring event, such as daily or weekly.
upvoted 1 times
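The two scheduled actions described above can be sketched with the CLI (group name and sizes are placeholders; standard cron cannot express "last day of month", so the scale-out here fires at 23:00 UTC on days 28-31, which guarantees capacity is in place before midnight on the 1st at the cost of scaling out a little early):

```shell
# Hedged sketch of option C; ASG name and sizes are placeholders.
# Scale out ahead of the month-end batch.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name month-end-scale-out \
  --recurrence "0 23 28-31 * *" \
  --min-size 4 --max-size 12 --desired-capacity 8

# Scale back in once the batch has finished on the morning of the 1st.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name month-end-scale-in \
  --recurrence "0 6 1 * *" \
  --min-size 2 --max-size 12 --desired-capacity 2
```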
bdp123 4 months ago
C is the correct answer as traffic spike is known
upvoted 1 times
jennyka76 4 months, 1 week ago
ANSWER - C
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
upvoted 2 times
Neha999 4 months, 1 week ago
C as the schedule of traffic spike is known beforehand.
upvoted 1 times
Question #334 Topic 1
A company wants to give a customer the ability to use on-premises Microsoft Active Directory to download files that are stored in Amazon S3. The customer’s application uses an SFTP client to download the files.
Which solution will meet these requirements with the LEAST operational overhead and no changes to the customer’s application?
A. Set up AWS Transfer Family with SFTP for Amazon S3. Configure integrated Active Directory authentication.
B. Set up AWS Database Migration Service (AWS DMS) to synchronize the on-premises client with Amazon S3. Configure integrated Active Directory authentication.
C. Set up AWS DataSync to synchronize between the on-premises location and the S3 location by using AWS IAM Identity Center (AWS Single Sign-On).
D. Set up a Windows Amazon EC2 instance with SFTP to connect the on-premises client with Amazon S3. Integrate AWS Identity and Access Management (IAM).
Community vote distribution
A (100%)
Steve_4542636 Highly Voted 3 months, 3 weeks ago
SFTP, FTP - think "Transfer" during test time
upvoted 5 times
antropaws Most Recent 1 month ago
A, no doubt. Why does the system give B as the correct answer?
upvoted 1 times
lht 1 month, 3 weeks ago
just A
upvoted 1 times
LuckyAro 4 months ago
AWS Transfer Family is a fully managed service that allows customers to transfer files over SFTP, FTPS, and FTP directly into and out of Amazon S3. It eliminates the need to manage any infrastructure for file transfer, which reduces operational overhead. Additionally, the service can be configured to use an existing Active Directory for authentication, which means that no changes need to be made to the customer's application.
upvoted 1 times
bdp123 4 months ago
Transfer family is used for SFTP
upvoted 1 times
TungPham 4 months, 1 week ago
Using AWS Transfer Family gives the LEAST operational overhead,
and it supports SFTP, so there are no changes to the customer’s application.
https://aws.amazon.com/vi/blogs/architecture/managed-file-transfer-using-aws-transfer-family-and-amazon-s3/
upvoted 2 times
Bhawesh 4 months, 1 week ago
A. Set up AWS Transfer Family with SFTP for Amazon S3. Configure integrated Active Directory authentication. https://docs.aws.amazon.com/transfer/latest/userguide/directory-services-users.html
upvoted 3 times
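Option A can be sketched in one CLI call (the directory ID is a placeholder; this assumes an AWS Directory Service directory that can trust the on-premises AD):

```shell
# Hedged sketch of option A; the directory ID is a placeholder.
# Creates a managed SFTP endpoint backed by Amazon S3 that authenticates
# users against AWS Directory Service.
aws transfer create-server \
  --protocols SFTP \
  --domain S3 \
  --identity-provider-type AWS_DIRECTORY_SERVICE \
  --identity-provider-details DirectoryId=d-1234567890
```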
Question #335 Topic 1
A company is experiencing sudden increases in demand. The company needs to provision large Amazon EC2 instances from an Amazon Machine Image (AMI). The instances will run in an Auto Scaling group. The company needs a solution that provides minimum initialization latency to meet the demand.
Which solution meets these requirements?
A. Use the aws ec2 register-image command to create an AMI from a snapshot. Use AWS Step Functions to replace the AMI in the Auto Scaling group.
B. Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace the AMI in the Auto Scaling group with the new AMI.
C. Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager (Amazon DLM). Create an AWS Lambda function that modifies the AMI in the Auto Scaling group.
D. Use Amazon EventBridge to invoke AWS Backup lifecycle policies that provision AMIs. Configure Auto Scaling group capacity limits as an event source in EventBridge.
Community vote distribution
B (87%) 13%
danielklein09 Highly Voted 3 weeks, 6 days ago
read the question 5 times, didn't understand a thing :(
upvoted 5 times
elearningtakai 3 months ago
B: "EBS fast snapshot restore": minimizes initialization latency. This is a good choice.
upvoted 2 times
Zox42 3 months ago
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
upvoted 2 times
geekgirl22 4 months ago
Keyword: minimize initialization latency == snapshot. A and B both involve snapshots, but B is the one that makes sense. C has DLM, which can create AMIs, but it does not talk about latency and snapshots.
upvoted 3 times
LuckyAro 4 months ago
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows for rapid restoration of EBS volumes from snapshots. This reduces the time required to create an AMI from a snapshot, which is useful for quickly provisioning large Amazon EC2 instances.
Provisioning an AMI by using the fast snapshot restore feature is a fast and efficient way to create an AMI. Once the AMI is created, it can be replaced in the Auto Scaling group without any downtime or disruption to running instances.
upvoted 1 times
bdp123 4 months, 1 week ago
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows you to quickly create a new Amazon Machine Image (AMI) from a snapshot, which can help reduce the initialization latency when provisioning new instances. Once the AMI is provisioned, you can replace
the AMI in the Auto Scaling group with the new AMI. This will ensure that new instances are launched from the updated AMI and are able to meet the increased demand quickly.
upvoted 1 times
TungPham 4 months, 1 week ago
Provision an AMI by using the snapshot => not sure, because a snapshot only backs up an EBS volume, while an AMI backs up the whole instance.
Replace the AMI in the Auto Scaling group with the new AMI. => for what? https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
Amazon Data Lifecycle Manager helps automate snapshot and AMI management
upvoted 2 times
jennyka76 4 months, 1 week ago
agree with answer - B
upvoted 1 times
kpato87 4 months, 1 week ago
Option B is the most suitable solution for this use case, as it enables Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot, which significantly reduces the time required for creating an AMI from the snapshot. The fast snapshot restore feature enables Amazon EBS to pre-warm the EBS volumes associated with the snapshot, which reduces the time required to initialize the volumes when launching instances from the AMI.
upvoted 2 times
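The fast snapshot restore step of option B is a single call (snapshot ID and Availability Zones are placeholders):

```shell
# Hedged sketch of option B; snapshot ID and AZs are placeholders.
# Fast snapshot restore pre-initializes the snapshot so volumes created
# from it (including AMI root volumes) deliver full performance at
# launch instead of lazy-loading blocks from S3.
aws ec2 enable-fast-snapshot-restores \
  --availability-zones us-east-1a us-east-1b \
  --source-snapshot-ids snap-0123456789abcdef0
```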
bdp123 4 months, 1 week ago
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows you to quickly create a new Amazon Machine Image (AMI) from a snapshot, which can help reduce the initialization latency when provisioning new instances. Once the AMI is provisioned, you can replace the AMI in the Auto Scaling group with the new AMI. This will ensure that new instances are launched from the updated AMI and are able to meet the increased demand quickly.
upvoted 4 times
Question #336 Topic 1
A company hosts a multi-tier web application that uses an Amazon Aurora MySQL DB cluster for storage. The application tier is hosted on Amazon EC2 instances. The company’s IT security guidelines mandate that the database credentials be encrypted and rotated every 14 days.
What should a solutions architect do to meet this requirement with the LEAST operational effort?
A. Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the KMS key with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days.
B. Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the
SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter, and load these parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.
C. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system in all EC2 instances of the application tier. Restrict the access to the file on the file system so that the application can read the file and that only super users can modify the file. Implement an AWS Lambda function that rotates the key in
Aurora every 14 days and writes new credentials into the file.
D. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon S3 bucket that the application uses to load the credentials. Download the file to the application regularly to ensure that the correct credentials are used. Implement an AWS Lambda function that rotates the Aurora credentials every 14 days and uploads these credentials to the file in the S3 bucket.
Community vote distribution
A (100%)
elearningtakai 3 months ago
AWS Secrets Manager allows you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. With this service, you can automate the rotation of secrets, such as database credentials, on a schedule that you choose. The solution allows you to create a new secret with the appropriate credentials and associate it with the Aurora DB cluster. You can then configure a custom rotation period of 14 days to ensure that the credentials are automatically rotated every two weeks, as required by the IT security guidelines. This approach requires the least amount of operational effort as it allows you to manage secrets centrally without modifying your application code or infrastructure.
upvoted 2 times
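The 14-day rotation in option A is one CLI call once a rotation function is in place (secret name and Lambda ARN are placeholders):

```shell
# Hedged sketch of option A; secret name and Lambda ARN are placeholders.
# Secrets Manager invokes the rotation function on the configured schedule.
aws secretsmanager rotate-secret \
  --secret-id prod/aurora/app-credentials \
  --rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:rds-rotation \
  --rotation-rules AutomaticallyAfterDays=14
```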
elearningtakai 3 months ago
A: AWS Secrets Manager. Simply this supported rotate feature, and secure to store credentials instead of EFS or S3.
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
LuckyAro 4 months ago
A proposes to create a new AWS KMS encryption key and use AWS Secrets Manager to create a new secret that uses the KMS key with the appropriate credentials. Then, the secret will be associated with the Aurora DB cluster, and a custom rotation period of 14 days will be configured. AWS Secrets Manager will automate the process of rotating the database credentials, which will reduce the operational effort required to meet the IT security guidelines.
upvoted 1 times
jennyka76 4 months, 1 week ago
Answer is A
To implement password rotation lifecycles, use AWS Secrets Manager. You can rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle using Secrets Manager.
https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/
upvoted 3 times
Neha999 4 months, 1 week ago
A
upvoted 1 times
Question #337 Topic 1
A company has deployed a web application on AWS. The company hosts the backend database on Amazon RDS for MySQL with a primary DB
instance and five read replicas to support scaling needs. The read replicas must lag no more than 1 second behind the primary DB instance. The database routinely runs scheduled stored procedures.
As traffic on the website increases, the replicas experience additional lag during periods of peak load. A solutions architect must reduce the replication lag as much as possible. The solutions architect must minimize changes to the application code and must minimize ongoing
operational overhead.
Which solution will meet these requirements?
A. Migrate the database to Amazon Aurora MySQL. Replace the read replicas with Aurora Replicas, and configure Aurora Auto Scaling. Replace the stored procedures with Aurora MySQL native functions.
B. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the application to check the cache before the application queries the database. Replace the stored procedures with AWS Lambda functions.
C. Migrate the database to a MySQL database that runs on Amazon EC2 instances. Choose large, compute optimized EC2 instances for all replica nodes. Maintain the stored procedures on the EC2 instances.
D. Migrate the database to Amazon DynamoDB. Provision a large number of read capacity units (RCUs) to support the required throughput, and configure on-demand capacity scaling. Replace the stored procedures with DynamoDB streams.
Community vote distribution
A (64%) B (36%)
fkie4 Highly Voted 3 months, 3 weeks ago
i hate this kind of question
upvoted 12 times
MrAWSAssociate Most Recent 1 week ago
First, ElastiCache involves heavy changes to the application code. The question mentions that "the solutions architect must minimize changes to the application code". Therefore B is not suitable, and A is more appropriate for the question's requirements.
upvoted 1 times
KMohsoe 1 month, 1 week ago
Why not B? Please explain to me.
upvoted 2 times
asoli 3 months, 1 week ago
Using Cache required huge changes in the application. Several things need to change to use cache in front of the DB in the application. So, option B is not correct.
Aurora will help to reduce replication lag for read replica
upvoted 4 times
kaushald 3 months, 2 weeks ago
Option A is the most appropriate solution for reducing replication lag without significant changes to the application code and minimizing ongoing operational overhead. Migrating the database to Amazon Aurora MySQL allows for improved replication performance and higher scalability compared to Amazon RDS for MySQL. Aurora Replicas provide faster replication, reducing the replication lag, and Aurora Auto Scaling ensures that there are enough Aurora Replicas to handle the incoming traffic. Additionally, Aurora MySQL native functions can replace the stored procedures, reducing the load on the database and improving performance.
Option B is not the best solution since adding an ElastiCache for Redis cluster does not address the replication lag issue, and the cache may not have the most up-to-date information. Additionally, replacing the stored procedures with AWS Lambda functions adds additional complexity and may not improve performance.
upvoted 3 times
taehyeki 3 months, 3 weeks ago
A and B are confusing me...
I would like to go with B...
upvoted 1 times
bangfire 3 months, 2 weeks ago
Option B is incorrect because it suggests using ElastiCache for Redis as a caching layer in front of the database, but this would not necessarily reduce the replication lag on the read replicas. Additionally, it suggests replacing the stored procedures with AWS Lambda functions, which may require significant changes to the application code.
upvoted 4 times
lizzard812 3 months ago
Yes, and moreover Redis requires app refactoring, which is solid operational overhead.
upvoted 1 times
Nel8 4 months ago
By using ElastiCache you avoid a lot of common issues you might encounter. ElastiCache is a database caching solution. ElastiCache Redis per se, supports failover and Multi-AZ. And Most of all, ElastiCache is well suited to place in front of RDS.
Migrating a database such as option A, requires operational overhead.
upvoted 2 times
bdp123 4 months ago
Aurora can have up to 15 read replicas - much faster than RDS https://aws.amazon.com/rds/aurora/
upvoted 4 times
ChrisG1454 3 months, 3 weeks ago
" As a result, all Aurora Replicas return the same data for query results with minimal replica lag. This lag is usually much less than 100 milliseconds after the primary instance has written an update "
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 2 times
ChrisG1454 3 months, 2 weeks ago
You can invoke an Amazon Lambda function from an Amazon Aurora MySQL-Compatible Edition DB cluster with the "native function"....
https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html
upvoted 1 times
jennyka76 4 months, 1 week ago
Answer - A https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PostgreSQL.Replication.ReadReplicas.html
You can scale reads for your Amazon RDS for PostgreSQL DB instance by adding read replicas to the instance. As with other Amazon RDS database engines, RDS for PostgreSQL uses the native replication mechanisms of PostgreSQL to keep read replicas up to date with changes on the source DB. For general information about read replicas and Amazon RDS, see Working with read replicas.
upvoted 3 times
Question #338 Topic 1
A solutions architect must create a disaster recovery (DR) plan for a high-volume software as a service (SaaS) platform. All data for the platform is stored in an Amazon Aurora MySQL DB cluster.
The DR plan must replicate data to a secondary AWS Region.
Which solution will meet these requirements MOST cost-effectively?
A. Use MySQL binary log replication to an Aurora cluster in the secondary Region. Provision one DB instance for the Aurora cluster in the secondary Region.
B. Set up an Aurora global database for the DB cluster. When setup is complete, remove the DB instance from the secondary Region.
C. Use AWS Database Migration Service (AWS DMS) to continuously replicate data to an Aurora cluster in the secondary Region. Remove the DB instance from the secondary Region.
D. Set up an Aurora global database for the DB cluster. Specify a minimum of one DB instance in the secondary Region.
Community vote distribution
D (56%) B (17%) A (17%) 11%
jennyka76 Highly Voted 4 months, 1 week ago
Answer - A https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html
Before you begin
Before you can create an Aurora MySQL DB cluster that is a cross-Region read replica, you must turn on binary logging on your source Aurora MySQL DB cluster. Cross-region replication for Aurora MySQL uses MySQL binary replication to replay changes on the cross-Region read replica DB cluster.
upvoted 8 times
ChrisG1454 3 months, 3 weeks ago
The question states " The DR plan must replicate data to a "secondary" AWS Region."
In addition to Aurora Replicas, you have the following options for replication with Aurora MySQL:
Aurora MySQL DB clusters in different AWS Regions.
You can replicate data across multiple Regions by using an Aurora global database. For details, see High availability across AWS Regions with Aurora global databases
You can create an Aurora read replica of an Aurora MySQL DB cluster in a different AWS Region, by using MySQL binary log (binlog) replication. Each cluster can have up to five read replicas created this way, each in a different Region.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 1 times
ChrisG1454 3 months, 3 weeks ago
The question is asking for the most cost-effective solution. Aurora global databases are more expensive.
https://aws.amazon.com/rds/aurora/pricing/
upvoted 1 times
leoattf 4 months ago
On this same URL you provided, there is a note highlighted, stating the following:
"Replication from the primary DB cluster to all secondaries is handled by the Aurora storage layer rather than by the database engine, so lag time for replicating changes is minimal—typically, less than 1 second. Keeping the database engine out of the replication process means that the database engine is dedicated to processing workloads. It also means that you don't need to configure or manage the Aurora MySQL binlog (binary logging) replication."
So, answer should be A
upvoted 1 times
leoattf 4 months ago
Correction: So, answer should be D
upvoted 1 times
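The binlog prerequisite quoted above is set through a custom DB cluster parameter group on the source cluster. A minimal sketch of the request body a boto3 `modify_db_cluster_parameter_group` call would take; the parameter group name is hypothetical:

```python
# Sketch: request body for RDS modify_db_cluster_parameter_group to turn on
# binary logging on the source Aurora MySQL cluster (the prerequisite for a
# cross-Region binlog read replica). Group name is hypothetical.
def binlog_params(group_name: str) -> dict:
    return {
        "DBClusterParameterGroupName": group_name,
        "Parameters": [
            {
                "ParameterName": "binlog_format",
                "ParameterValue": "MIXED",  # ROW / STATEMENT / MIXED all enable binlogging
                "ApplyMethod": "pending-reboot",  # static parameter: applied on reboot
            }
        ],
    }

req = binlog_params("my-aurora-source-params")
```

The dict would be passed as keyword arguments to the boto3 RDS client; nothing here actually calls AWS.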
Most Recent
MOST cost-effective --> B
See section "Creating a headless Aurora DB cluster in a secondary Region" on the link https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
"Although an Aurora global database requires at least one secondary Aurora DB cluster in a different AWS Region than the primary, you can use a headless configuration for the secondary cluster. A headless secondary Aurora DB cluster is one without a DB instance. This type of configuration can lower expenses for an Aurora global database. In an Aurora DB cluster, compute and storage are decoupled. Without the DB instance, you're not charged for compute, only for storage. If it's set up correctly, a headless secondary's storage volume is kept in-sync with the primary Aurora DB cluster."
upvoted 3 times
Abhineet9148232 3 months, 3 weeks ago
D: With Amazon Aurora Global Database, you pay for replicated write I/Os between the primary Region and each secondary Region (in this case 1).
Not A because it achieves the same, would be equally costly and adds overhead.
upvoted 2 times
[Removed] 3 months, 3 weeks ago
CCCCCC
upvoted 2 times
Steve_4542636 3 months, 3 weeks ago
I think Amazon is looking for D here. I don't think A is intended, because that would require knowledge of MySQL, which isn't what they are testing us on. Not option C, because the question states a large volume; if the volume were low, then DMS would be better. This is not a good question.
upvoted 3 times
Very true. Amazon wants everyone to use AWS, so why would they test MySQL-specific knowledge?
upvoted 1 times
D provides automatic replication
upvoted 3 times
D provides automatic replication to a secondary Region through the Aurora global database feature. This feature provides automatic replication of data across AWS Regions, with the ability to control and configure the replication process. By specifying a minimum of one DB instance in the secondary Region, you can ensure that your secondary database is always available and up-to-date, allowing for quick failover in the event of a disaster.
upvoted 2 times
Actually I change my answer to 'D' because of following:
An Aurora DB cluster can contain up to 15 Aurora Replicas. The Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans WITHIN an AWS Region. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
You can replicate data across multiple Regions by using an Aurora global database
upvoted 1 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.html
Global database is only for specific engine versions, and they did not tell us the version.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
upvoted 1 times
doodledreads 4 months, 1 week ago
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
Checkout the part Recovery from Region-wide outages
upvoted 1 times
Answer is A
upvoted 2 times
Question #339 Topic 1
A company has a custom application with embedded credentials that retrieves information from an Amazon RDS MySQL DB instance. Management says the application must be made more secure with the least amount of programming effort.
What should a solutions architect do to meet these requirements?
Use AWS Key Management Service (AWS KMS) to create keys. Configure the application to load the database credentials from AWS KMS. Enable automatic key rotation.
Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secret Manager.
Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.
Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the
application user in the RDS for MySQL database using Parameter Store.
Community vote distribution
C (100%)
Bhawesh Highly Voted 4 months, 1 week ago
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.
upvoted 8 times
cloudbusting Highly Voted 4 months, 1 week ago
Parameter Store does not provide automatic credential rotation.
upvoted 8 times
Abrar2022 Most Recent 2 weeks, 3 days ago
If you need to store DB credentials with rotation, use AWS Secrets Manager. Systems Manager Parameter Store has no built-in rotation.
upvoted 1 times
AlessandraSAA 3 months, 3 weeks ago
why it's not A?
upvoted 3 times
MssP 3 months ago
It is asking for credentials, not for encryption keys.
upvoted 4 times
PoisonBlack 1 month, 3 weeks ago
So credentials rotation is secrets manager and key rotation is KMS?
upvoted 1 times
bdp123 4 months ago
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
upvoted 1 times
LuckyAro 4 months ago
C is a valid solution for securing the custom application with the least amount of programming effort. It involves creating credentials on the RDS for MySQL database for the application user and storing them in AWS Secrets Manager. The application can then be configured to load the database credentials from Secrets Manager. Additionally, the solution includes setting up a credentials rotation schedule for the application user in
the RDS for MySQL database using Secrets Manager, which will automatically rotate the credentials at a specified interval without requiring any programming effort.
upvoted 2 times
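The rotation schedule option C describes maps to a single Secrets Manager `rotate_secret` call. A sketch of its request shape; the secret name and Lambda ARN are hypothetical (Secrets Manager provides a managed rotation function template for RDS MySQL, so you don't write the rotation code yourself):

```python
# Sketch: request body for Secrets Manager rotate_secret attaching an automatic
# rotation schedule to the DB credentials. SecretId and ARN are hypothetical.
def rotation_request(secret_id: str, rotation_lambda_arn: str, days: int = 30) -> dict:
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

req = rotation_request(
    "prod/app/mysql-credentials",
    "arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRDSMySQLRotation",
)
```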
jennyka76 4 months, 1 week ago
Answer - C
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
upvoted 3 times
Question #340 Topic 1
A media company hosts its website on AWS. The website application’s architecture includes a fleet of Amazon EC2 instances behind an
Application Load Balancer (ALB) and a database that is hosted on Amazon Aurora. The company’s cybersecurity team reports that the application is vulnerable to SQL injection.
How should the company resolve this issue?
A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.
B. Create an ALB listener rule to reply to SQL injections with a fixed response.
C. Subscribe to AWS Shield Advanced to block all SQL injection attempts automatically.
D. Set up Amazon Inspector to block all SQL injection attempts automatically.
Community vote distribution
A (100%)
Bhawesh Highly Voted 4 months, 1 week ago
A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.
SQL Injection - AWS WAF
DDoS - AWS Shield
upvoted 15 times
jennyka76 Highly Voted 4 months, 1 week ago
Answer - A
https://aws.amazon.com/premiumsupport/knowledge-center/waf-block-common-attacks/#:~:text=To%20protect%20your%20applications%20against,%2C%20query%20string%2C%20or%20URI.
Protect against SQL injection and cross-site scripting
To protect your applications against SQL injection and cross-site scripting (XSS) attacks, use the built-in SQL injection and cross-site scripting engines. Remember that attacks can be performed on different parts of the HTTP request, such as the HTTP header, query string, or URI. Configure the AWS WAF rules to inspect different parts of the HTTP request against the built-in mitigation engines.
upvoted 6 times
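The built-in SQL injection engine described above appears in a WAFv2 web ACL as an `SqliMatchStatement` rule. A sketch of one such rule inspecting the request body; the rule name and priority are arbitrary:

```python
# Sketch: a WAFv2 rule statement that blocks requests whose body matches the
# built-in SQL injection detection engine. Name/priority are arbitrary choices.
def sqli_body_rule(name: str = "block-sqli-body", priority: int = 0) -> dict:
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "SqliMatchStatement": {
                "FieldToMatch": {"Body": {}},
                "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

rule = sqli_body_rule()
```

As the quoted doc notes, attacks can arrive in headers, query strings, or URIs too, so a real web ACL would repeat the statement for those `FieldToMatch` targets.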
KMohsoe Most Recent 1 month, 1 week ago
SQL injection -> WAF
upvoted 1 times
lexotan 2 months, 1 week ago
WAF is the right one
upvoted 1 times
akram_akram 2 months, 3 weeks ago
SQL Injection - AWS WAF
DDoS - AWS Shield
upvoted 1 times
movva12 3 months, 1 week ago
Answer C - Shield Advanced (WAF + Firewall Manager)
upvoted 1 times
fkie4 3 months, 3 weeks ago
It is A. I am happy to see Amazon gives out score like this...
upvoted 2 times
LuckyAro 4 months ago
AWS WAF is a managed service that protects web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF enables customers to create custom rules that block common attack patterns, such as SQL
injection attacks.
By using AWS WAF in front of the ALB and associating the appropriate web ACLs with AWS WAF, the company can protect its website application from SQL injection attacks. AWS WAF will inspect incoming traffic to the website application and block requests that match the defined SQL injection patterns in the web ACLs. This will help to prevent SQL injection attacks from reaching the application, thereby improving the overall security posture of the application.
upvoted 2 times
LuckyAro 4 months ago
B, C, and D are not the best solutions for this issue. Replying to SQL injections with a fixed response is not a recommended approach, as it does not actually fix the vulnerability but only masks the issue. Subscribing to AWS Shield Advanced is useful to protect against DDoS attacks but does not protect against SQL injection vulnerabilities. Amazon Inspector is a vulnerability assessment tool and can identify vulnerabilities but cannot block attacks in real time.
upvoted 2 times
pbpally 4 months, 1 week ago
Bhawesh answers it perfect so I'm avoiding redundancy but agree on it being A.
upvoted 2 times
Question #341 Topic 1
A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon
QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to enforce column-level authorization so that the company’s marketing team can access only a subset of columns in the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.
B. Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce column-level access control. Use Amazon S3 as the data source in QuickSight.
C. Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.
D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
Community vote distribution
D (100%)
K0nAn Highly Voted 4 months, 1 week ago
This solution leverages AWS Lake Formation to ingest data from the Aurora MySQL database into the S3 data lake, while enforcing column-level access control for QuickSight users. Lake Formation can be used to create and manage the data lake's metadata and enforce security and governance policies, including column-level access control. This solution then uses Amazon Athena as the data source in QuickSight to query the data in the S3 data lake. This solution minimizes operational overhead by leveraging AWS services to manage and secure the data, and by using a standard query service (Amazon Athena) to provide a SQL interface to the data.
upvoted 6 times
jennyka76 Highly Voted 4 months, 1 week ago
Answer - D
https://aws.amazon.com/blogs/big-data/enforce-column-level-authorization-with-amazon-quicksight-and-aws-lake-formation/
upvoted 5 times
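The column-level control in option D is a Lake Formation `grant_permissions` call on a `TableWithColumns` resource. A sketch of the request; the role, database, table, and column names are hypothetical:

```python
# Sketch: Lake Formation grant_permissions request letting the marketing role
# SELECT only the listed columns. All names here are hypothetical.
def column_grant(principal_arn: str, database: str, table: str, columns: list) -> dict:
    return {
        "Principal": {"DataLakePrincipalIdentifier": principal_arn},
        "Resource": {
            "TableWithColumns": {
                "DatabaseName": database,
                "Name": table,
                "ColumnNames": columns,
            }
        },
        "Permissions": ["SELECT"],
    }

grant = column_grant(
    "arn:aws:iam::123456789012:role/MarketingAnalysts",
    "datalake_db", "customers", ["customer_id", "region", "segment"],
)
```

Athena (as the QuickSight data source) then returns only those columns to the marketing users, with no per-user IAM policy maintenance.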
LuckyAro Most Recent 4 months ago
Using a Lake Formation blueprint to ingest the data from the database to the S3 data lake, using Lake Formation to enforce column-level access control for the QuickSight users, and using Amazon Athena as the data source in QuickSight. This solution requires the least operational overhead as it utilizes the features provided by AWS Lake Formation to enforce column-level authorization, which simplifies the process and reduces the need for additional configuration and maintenance.
upvoted 3 times
Bhawesh 4 months, 1 week ago
D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
upvoted 2 times
Question #342 Topic 1
A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to provision the capacity 30 minutes before the jobs run.
Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to
analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group’s desired capacity.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric. Set the target value for the metric to 60%.
B. Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.
C. Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.
D. Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group reaches 60%. Configure the Lambda function to increase the Auto Scaling group’s desired capacity and maximum capacity by 20%.
Community vote distribution
C (65%) B (29%) 6%
fkie4 Highly Voted 3 months, 2 weeks ago
B is NOT correct. the question said "The company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts.".
answer B said "Set the appropriate desired capacity, minimum capacity, and maximum capacity". how can someone set desired capacity if he has no resources to analyze the required capacity.
Read carefully Amigo
upvoted 8 times
omoakin 1 month ago
scheduled scaling....
upvoted 1 times
ealpuche 1 month, 2 weeks ago
But you can make a vague estimation according to the resources used; you don't need to make machine learning models to do that. You only need common sense.
upvoted 1 times
Abrar2022 2 weeks, 3 days ago
If the baseline CPU utilization is 60%, that's enough information to predict some aspect of future usage. So the keyword is "predictive": judging by past usage.
upvoted 1 times
omoakin 1 month ago
BBBBBBBBBBBBB
upvoted 1 times
ealpuche 1 month, 2 weeks ago
B.
you can make a vague estimation according to the resources used; you don't need to make machine-learning models to do that. You only need common sense.
upvoted 1 times
Use predictive scaling to increase the number of EC2 instances in your Auto Scaling group in advance of daily and weekly patterns in traffic flows.
Predictive scaling is well suited for situations where you have:
Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 1 times
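The predictive scaling policy option C describes can be sketched as a `put_scaling_policy` configuration: target 60% CPU and pre-launch capacity 30 minutes (1800 seconds) ahead of the forecast via `SchedulingBufferTime`:

```python
# Sketch: predictive scaling policy configuration for the Auto Scaling group.
# Targets 60% CPU and launches instances 30 minutes before the forecasted need.
def predictive_policy() -> dict:
    return {
        "PolicyType": "PredictiveScaling",
        "PredictiveScalingConfiguration": {
            "MetricSpecifications": [
                {
                    "TargetValue": 60.0,
                    "PredefinedMetricPairSpecification": {
                        "PredefinedMetricType": "ASGCPUUtilization"
                    },
                }
            ],
            "Mode": "ForecastAndScale",
            "SchedulingBufferTime": 30 * 60,  # pre-launch 30 min early, in seconds
        },
    }

policy = predictive_policy()
```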
A scheduled scaling policy allows you to set up specific times for your Auto Scaling group to scale out or scale in. By creating a scheduled scaling policy for the Auto Scaling group, you can set the appropriate desired capacity, minimum capacity, and maximum capacity, and set the recurrence to weekly. You can then set the start time to 30 minutes before the batch jobs run, ensuring that the required capacity is provisioned before the jobs run.
Option C, creating a predictive scaling policy for the Auto Scaling group, is not necessary in this scenario since the company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts. This would require analyzing the required capacity trends for the Auto Scaling group counts to determine the appropriate scaling policy.
upvoted 3 times
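For comparison, the scheduled scaling approach in option B is a `put_scheduled_update_group_action` request with a weekly cron recurrence. A sketch with hypothetical values (jobs assumed to start 02:00 UTC Mondays, so capacity is raised at 01:30):

```python
# Sketch: weekly scheduled scaling action, 30 minutes before the batch jobs.
# Group name, cron time, and capacities are hypothetical.
def weekly_prewarm() -> dict:
    return {
        "AutoScalingGroupName": "batch-asg",
        "ScheduledActionName": "prewarm-before-weekly-batch",
        "Recurrence": "30 1 * * 1",  # 01:30 UTC every Monday
        "MinSize": 4,
        "MaxSize": 12,
        "DesiredCapacity": 8,
    }

action = weekly_prewarm()
```

Note the catch the thread debates: the capacity numbers here must come from somewhere, which is exactly the analysis the question says the company cannot do.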
[Removed] 2 months, 4 weeks ago
(typo above) C is correct..
upvoted 1 times
[Removed] 2 months, 4 weeks ago
B is correct. "Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch.", meaning the company does not have to analyze the capacity trends themselves. https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 1 times
Look at fkie4 comment... no way to know desired capacity!!! -> B not correct
upvoted 1 times
The text says:
1. "A transaction processing company has weekly scripted batch jobs" - there is a schedule.
2. "The company does not have the resources to analyze the required capacity trends for the Auto Scaling" - so do not use B.
upvoted 1 times
The second part of the question invalidates option B, they don't know how to procure requirements and need something to do it for them, therefore C.
upvoted 1 times
In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature.
upvoted 2 times
WherecanIstart 3 months, 2 weeks ago
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 2 times
UnluckyDucky 3 months, 2 weeks ago
"The company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts"
Using predictive schedule seems appropriate here, however the question says the company doesn't have the resources to analyze this, even though forecast does it for you using ML.
The job runs weekly therefore the easiest way to achieve this with the LEAST operational overhead, seems to be scheduled scaling. Both solutions achieve the goal, B imho does it better, considering the limitations.
Predictive Scaling:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
Scheduled Scaling:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
upvoted 2 times
samcloudaws 3 months, 3 weeks ago
Scheduled scaling seems like the simplest way to solve this.
upvoted 3 times
Steve_4542636 3 months, 3 weeks ago
"The company needs to provision the capacity 30 minutes before the jobs run." This means the ASG group needs to scale BEFORE the CPU utilization hits 60%. Dynamic scaling only responds to a scaling metric setup such as average CPU utilization at 60% for 5 minutes. The forecasting option is automatic, however, it does require some time for it to be effective since it needs the EC2 utilization in the past to predict the future.
upvoted 2 times
nder 4 months ago
A dynamic scaling policy has the least operational overhead.
upvoted 1 times
dpmahendra 4 months ago
B Scheduled scaling
upvoted 2 times
dpmahendra 4 months ago
C: Use predictive scaling to increase the number of EC2 instances in your Auto Scaling group in advance of daily and weekly patterns in traffic flows.
upvoted 1 times
LuckyAro 4 months ago
This solution automates the capacity provisioning process based on the actual workload, without requiring any manual intervention. With dynamic scaling, the Auto Scaling group will automatically adjust the number of instances based on the actual workload. The target value for the CPU utilization metric is set to 60%, which is the baseline CPU utilization that is noted on each run, indicating that this is a reasonable level of utilization for the workload. This solution does not require any scheduling or forecasting, reducing the operational overhead.
upvoted 1 times
MssP 3 months ago
What about provision Capacity 30 minutes before?? Only B C make this, no?
upvoted 1 times
Question #343 Topic 1
A solutions architect is designing a company’s disaster recovery (DR) architecture. The company has a MySQL database that runs on an Amazon EC2 instance in a private subnet with scheduled backup. The DR design needs to include multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the MySQL database to multiple EC2 instances. Configure a standby EC2 instance in the DR Region. Turn on replication.
B. Migrate the MySQL database to Amazon RDS. Use a Multi-AZ deployment. Turn on read replication for the primary DB instance in the different Availability Zones.
C. Migrate the MySQL database to an Amazon Aurora global database. Host the primary DB cluster in the primary Region. Host the secondary DB cluster in the DR Region.
D. Store the scheduled backup of the MySQL database in an Amazon S3 bucket that is configured for S3 Cross-Region Replication (CRR). Use the data backup to restore the database in the DR Region.
Community vote distribution
C (100%)
GalileoEC2 3 months ago
C. Why B? B is multi-AZ in one Region; C is multi-Region, as requested.
upvoted 1 times
lucdt4 1 month ago
" The DR design needs to include multiple AWS Regions."
With the requirement for a DR site across multiple AWS Regions, B is wrong because it deploys Multi-AZ (this is not multi-Region).
upvoted 1 times
AlessandraSAA 3 months, 3 weeks ago
A. Multiple EC2 instances to be configured and updated manually in case of DR.
B. Amazon RDS=Multi-AZ while it asks to be multi-region
C. correct, see comment from LuckyAro
D. Manual process to start the DR, therefore same limitation as answer A
upvoted 4 times
KZM 4 months ago
Amazon Aurora global database can span and replicate DB Servers between multiple AWS Regions. And also compatible with MySQL.
upvoted 3 times
LuckyAro 4 months ago
C: Migrate MySQL database to an Amazon Aurora global database is the best solution because it requires minimal operational overhead. Aurora is a managed service that provides automatic failover, so standby instances do not need to be manually configured. The primary DB cluster can be hosted in the primary Region, and the secondary DB cluster can be hosted in the DR Region. This approach ensures that the data is always available and up-to-date in multiple Regions, without requiring significant manual intervention.
upvoted 3 times
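Option C boils down to two RDS API calls: promote the existing cluster into a global database, then create a secondary cluster in the DR Region. A sketch of both request shapes; identifiers and Regions are hypothetical:

```python
# Sketch: the two RDS requests behind an Aurora global database DR setup.
# All identifiers and Regions below are hypothetical.
def global_db_requests() -> tuple:
    # rds.create_global_cluster, issued in the primary Region (us-east-1):
    create_global = {
        "GlobalClusterIdentifier": "app-global",
        "SourceDBClusterIdentifier": (
            "arn:aws:rds:us-east-1:123456789012:cluster:app-primary"
        ),
    }
    # rds.create_db_cluster, issued against the DR Region (e.g. eu-west-1);
    # naming the global cluster joins it as a read-only secondary:
    create_secondary = {
        "DBClusterIdentifier": "app-secondary",
        "Engine": "aurora-mysql",
        "GlobalClusterIdentifier": "app-global",
    }
    return create_global, create_secondary

g, s = global_db_requests()
```

Replication between the clusters is then handled by the Aurora storage layer, with no binlog configuration to manage.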
LuckyAro 4 months ago
With dynamic scaling, the Auto Scaling group will automatically adjust the number of instances based on the actual workload. The target value for the CPU utilization metric is set to 60%, which is the baseline CPU utilization that is noted on each run, indicating that this is a reasonable level of utilization for the workload. This solution does not require any scheduling or forecasting, reducing the operational overhead.
upvoted 1 times
LuckyAro 4 months ago
Sorry, Posted right answer to the wrong question, mistakenly clicked the next question, sorry.
upvoted 3 times
geekgirl22 4 months ago
C is the answer as RDS is only multi-zone not multi region.
upvoted 1 times
SMAZ 4 months ago
C
Option A has operational overhead, whereas option C does not.
upvoted 1 times
alexman 4 months, 1 week ago
C mentions multiple regions. Option B is within the same region
upvoted 3 times
jennyka76 4 months, 1 week ago
ANSWER - B ?? NOT SURE
upvoted 1 times
Question #344 Topic 1
A company has a Java application that uses Amazon Simple Queue Service (Amazon SQS) to parse messages. The application cannot parse
messages that are larger than 256 KB in size. The company wants to implement a solution to give the application the ability to parse messages as large as 50 MB.
Which solution will meet these requirements with the FEWEST changes to the code?
A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.
B. Use Amazon EventBridge to post large messages from the application instead of Amazon SQS.
C. Change the limit in Amazon SQS to handle messages that are larger than 256 KB.
D. Store messages that are larger than 256 KB in Amazon Elastic File System (Amazon EFS). Configure Amazon SQS to reference this location in the messages.
Community vote distribution
A (100%)
LuckyAro Highly Voted 4 months ago
A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.
Amazon SQS has a limit of 256 KB for the size of messages. To handle messages larger than 256 KB, the Amazon SQS Extended Client Library for Java can be used. This library allows messages larger than 256 KB to be stored in Amazon S3 and provides a way to retrieve and process them. Using this solution, the application code can remain largely unchanged while still being able to process messages up to 50 MB in size.
upvoted 5 times
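The pattern the Extended Client Library implements can be sketched in a few lines: payloads over the 256 KB SQS limit are offloaded to S3 and the queue message carries a pointer. The bucket name is hypothetical, and the real library does the S3 upload and retrieval transparently:

```python
# Sketch of the claim-check pattern the SQS Extended Client Library uses:
# large payloads go to S3; the SQS message holds a reference. Hypothetical bucket.
SQS_LIMIT = 256 * 1024  # bytes

def prepare_message(body: bytes, bucket: str = "large-payload-bucket") -> dict:
    if len(body) <= SQS_LIMIT:
        return {"MessageBody": body.decode(), "offloaded": False}
    key = f"payloads/{hash(body) & 0xFFFFFFFF:08x}"
    # (the real library uploads the payload to S3 here and sends a JSON pointer)
    return {"MessageBody": f"s3://{bucket}/{key}", "offloaded": True}

small = prepare_message(b"hello")
large = prepare_message(b"x" * (50 * 1024 * 1024))  # the question's 50 MB case
```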
Abrar2022 Most Recent 2 weeks, 3 days ago
Amazon SQS has a limit of 256 KB for the size of messages.
To handle messages larger than 256 KB, the Amazon SQS Extended Client Library for Java can be used.
upvoted 1 times
gold4otas 3 months ago
The Amazon SQS Extended Client Library for Java enables you to publish messages that are greater than the current SQS limit of 256 KB, up to a maximum of 2 GB.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-s3-messages.html
upvoted 1 times
bdp123 4 months ago
https://github.com/awslabs/amazon-sqs-java-extended-client-lib
upvoted 3 times
Arathore 4 months, 1 week ago
To send messages larger than 256 KiB, you can use the Amazon SQS Extended Client Library for Java. This library allows you to send an Amazon SQS message that contains a reference to a message payload in Amazon S3. The maximum payload size is 2 GB.
upvoted 4 times
Neha999 4 months, 1 week ago
A
For messages > 256 KB, use Amazon SQS Extended Client Library for Java https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/quotas-messages.html
upvoted 4 times
Question #345 Topic 1
A company wants to restrict access to the content of one of its main web applications and to protect the content by using authorization
techniques available on AWS. The company wants to implement a serverless architecture and an authentication solution for fewer than 100 users. The solution needs to integrate with the main web application and serve web content globally. The solution must also scale as the company's user base grows while providing the lowest login latency possible.
Which solution will meet these requirements MOST cost-effectively?
A. Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon CloudFront to serve the web application globally.
B. Use AWS Directory Service for Microsoft Active Directory for authentication. Use AWS Lambda for authorization. Use an Application Load Balancer to serve the web application globally.
C. Use Amazon Cognito for authentication. Use AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web application globally.
D. Use AWS Directory Service for Microsoft Active Directory for authentication. Use Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.
Community vote distribution
A (100%)
Lonojack Highly Voted 4 months ago
CloudFront = globally
Lambda@Edge = authorization / latency
Cognito = authentication for web apps
upvoted 8 times
kraken21 2 months, 4 weeks ago
Lambda@Edge for authorization
https://aws.amazon.com/blogs/networking-and-content-delivery/adding-http-security-headers-using-lambdaedge-and-amazon-cloudfront/
upvoted 2 times
LuckyAro 4 months ago
Amazon CloudFront is a global content delivery network (CDN) service that can securely deliver web content, videos, and APIs at scale. It integrates with Cognito for authentication and with Lambda@Edge for authorization, making it an ideal choice for serving web content globally.
Lambda@Edge is a service that lets you run AWS Lambda functions globally closer to users, providing lower latency and faster response times. It can also handle authorization logic at the edge to secure content in CloudFront. For this scenario, Lambda@Edge can provide authorization for the web application while leveraging the low-latency benefit of running at the edge.
upvoted 2 times
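The authorization piece of option A is a Lambda@Edge viewer-request function. A minimal sketch of the handler shape: requests without a session cookie (which Cognito would set after login) get a 401 before reaching the origin. The cookie name and check are hypothetical; a real handler would verify a Cognito JWT signature rather than just look for a cookie:

```python
# Sketch: Lambda@Edge viewer-request handler doing edge authorization.
# The "session" cookie check is a placeholder for real Cognito JWT validation.
def handler(event, context=None):
    request = event["Records"][0]["cf"]["request"]
    cookies = request.get("headers", {}).get("cookie", [])
    if any("session=" in c.get("value", "") for c in cookies):
        return request  # authorized: let CloudFront fetch the content
    return {"status": "401", "statusDescription": "Unauthorized"}

denied = handler({"Records": [{"cf": {"request": {"headers": {}}}}]})
allowed_req = {"headers": {"cookie": [{"key": "Cookie", "value": "session=abc"}]}}
allowed = handler({"Records": [{"cf": {"request": allowed_req}}]})
```

Running this at the edge is what gives the low login latency the question asks for.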
bdp123 4 months ago
CloudFront to serve globally
upvoted 1 times
SMAZ 4 months ago
A
Amazon Cognito for authentication and Lambda@Edge for authorization; Amazon CloudFront to serve the web application globally provides low-latency content delivery.
upvoted 3 times
Question #346 Topic 1
A company has an aging network-attached storage (NAS) array in its data center. The NAS array presents SMB shares and NFS shares to client workstations. The company does not want to purchase a new NAS array. The company also does not want to incur the cost of renewing the NAS array’s support contract. Some of the data is accessed frequently, but much of the data is inactive.
A solutions architect needs to implement a solution that migrates the data to Amazon S3, uses S3 Lifecycle policies, and maintains the same look and feel for the client workstations. The solutions architect has identified AWS Storage Gateway as part of the solution.
Which type of storage gateway should the solutions architect provision to meet these requirements?
A. Volume Gateway
B. Tape Gateway
C. Amazon FSx File Gateway
D. Amazon S3 File Gateway
Community vote distribution
D (100%)
LuckyAro Highly Voted 4 months ago
Amazon S3 File Gateway provides on-premises applications with access to virtually unlimited cloud storage using NFS and SMB file interfaces. It seamlessly moves frequently accessed data to a low-latency cache while storing colder data in Amazon S3, using S3 Lifecycle policies to transition data between storage classes over time.
In this case, the company's aging NAS array can be replaced with an Amazon S3 File Gateway that presents the same NFS and SMB shares to the client workstations. The data can then be migrated to Amazon S3 and managed using S3 Lifecycle policies
upvoted 5 times
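The lifecycle half of the solution is an S3 Lifecycle configuration on the gateway's bucket. A sketch with hypothetical prefix and day thresholds, moving the mostly-inactive data to cheaper tiers:

```python
# Sketch: S3 Lifecycle configuration for the File Gateway bucket. Prefix and
# day thresholds are hypothetical; tune them to actual access patterns.
def lifecycle_config() -> dict:
    return {
        "Rules": [
            {
                "ID": "tier-down-inactive-shares",
                "Status": "Enabled",
                "Filter": {"Prefix": "shares/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

cfg = lifecycle_config()
```

The clients keep seeing the same SMB/NFS shares through the gateway's cache, while cold objects age into cheaper storage classes behind the scenes.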
siyam008 Most Recent 3 months, 3 weeks ago
https://aws.amazon.com/blogs/storage/how-to-create-smb-file-shares-with-aws-storage-gateway-using-hyper-v/
upvoted 2 times
bdp123 4 months ago
https://aws.amazon.com/about-aws/whats-new/2018/06/aws-storage-gateway-adds-smb-support-to-store-objects-in-amazon-s3/
upvoted 2 times
everfly 4 months, 1 week ago
Amazon S3 File Gateway provides a file interface to objects stored in S3. It can be used for a file-based interface with S3, which allows the company to migrate their NAS array data to S3 while maintaining the same look and feel for client workstations. Amazon S3 File Gateway supports SMB and NFS protocols, which will allow clients to continue to access the data using these protocols. Additionally, Amazon S3 Lifecycle policies can be used to automate the movement of data to lower-cost storage tiers, reducing the storage cost of inactive data.
upvoted 3 times
Question #347 Topic 1
A company has an application that is running on Amazon EC2 instances. A solutions architect has standardized the company on a particular instance family and various instance sizes based on the current needs of the company.
The company wants to maximize cost savings for the application over the next 3 years. The company needs to be able to change the instance family and sizes in the next 6 months based on application popularity and usage.
Which solution will meet these requirements MOST cost-effectively?
A. Compute Savings Plan
B. EC2 Instance Savings Plan
C. Zonal Reserved Instances
D. Standard Reserved Instances
Community vote distribution
A (74%) B (23%)
AlmeroSenior Highly Voted 4 months ago
Read carefully, guys. They need to be able to change FAMILY, and although the EC2 Instance Savings Plan has a higher discount, it's clearly documented as not allowed >
EC2 Instance Savings Plans provide savings up to 72 percent off On-Demand, in exchange for a commitment to a specific instance family in a chosen AWS Region (for example, M5 in Virginia). These plans automatically apply to usage regardless of size (for example, m5.xlarge, m5.2xlarge, etc.), OS (for example, Windows, Linux, etc.), and tenancy (Host, Dedicated, Default) within the specified family in a Region.
upvoted 12 times
FFO 2 months, 1 week ago
Savings Plans are a flexible pricing model that offer low prices on Amazon EC2, AWS Lambda, and AWS Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. When you sign up for a Savings Plan, you will be charged the discounted Savings Plans price for your usage up to your commitment.
The company wants savings over the next 3 years but wants to change the instance type in 6 months. This invalidates A
upvoted 2 times
FFO 2 months, 1 week ago
Disregard! found more information:
We recommend Savings Plans (over Reserved Instances). Like Reserved Instances, Savings Plans offer lower prices (up to 72% savings compared to On-Demand Instance pricing). In addition, Savings Plans offer you the flexibility to change your usage as your needs evolve. For example, with Compute Savings Plans, lower prices will automatically apply when you change from C4 to C6g instances, shift a workload from EU (Ireland) to EU (London), or move a workload from Amazon EC2 to AWS Fargate or AWS Lambda. https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/
upvoted 1 times
mattcl Most Recent 1 week, 3 days ago
Answer D: You can use Standard Reserved Instances when you know that you need a specific instance type.
upvoted 1 times
kruasan 1 month, 4 weeks ago
Savings Plans offer a flexible pricing model that provides savings on AWS usage. You can save up to 72 percent on your AWS compute workloads. Compute Savings Plans provide lower prices on Amazon EC2 instance usage regardless of instance family, size, OS, tenancy, or AWS Region. This also applies to AWS Fargate and AWS Lambda usage. SageMaker Savings Plans provide you with lower prices for your Amazon SageMaker instance usage, regardless of your instance family, size, component, or AWS Region.
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
upvoted 2 times
kruasan 1 month, 4 weeks ago
With an EC2 Instance Savings Plan, you can change your instance size within the instance family (for example, from c5.xlarge to c5.2xlarge) or the operating system (for example, from Windows to Linux), or move from Dedicated tenancy to Default and continue to receive the discounted rate provided by your EC2 Instance Savings Plan.
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
upvoted 1 times
kruasan 1 month, 4 weeks ago
The company needs to be able to change the instance family and sizes in the next 6 months based on application popularity and usage.
Therefore EC2 Instance Savings Plan prerequisites are not fulfilled
upvoted 1 times
SkyZeroZx 2 months ago
EC2 Instance Savings Plan
upvoted 1 times
lexotan 2 months, 1 week ago
Why not D. you can change istance type and classes
upvoted 1 times
everfly 4 months ago
Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66%. These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy, and also apply to Fargate or Lambda usage.
EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% in exchange for commitment to usage of individual instance families in a Region
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 4 times
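As a rough, back-of-the-envelope comparison (the $0.50/hour on-demand rate is invented; the 66% and 72% figures are the discount ceilings quoted in the comment above), the trade-off can be put in numbers:

```python
# Illustrative 3-year cost comparison using the discount ceilings quoted
# above (up to 66% for Compute Savings Plans, up to 72% for EC2 Instance
# Savings Plans). The $0.50/hour on-demand baseline is a made-up number.
HOURS_3Y = 24 * 365 * 3

def three_year_cost(on_demand_hourly, discount):
    return on_demand_hourly * (1 - discount) * HOURS_3Y

on_demand  = three_year_cost(0.50, 0.00)
compute_sp = three_year_cost(0.50, 0.66)  # keeps family/size/Region flexibility
ec2_sp     = three_year_cost(0.50, 0.72)  # locked to one instance family

# EC2 Instance SP is cheaper on paper, but switching instance families in
# month 6 would forfeit its discount, which is why the Compute Savings
# Plan wins for this scenario.
print(f"on-demand:  ${on_demand:,.0f}")
print(f"compute SP: ${compute_sp:,.0f}")
print(f"EC2 SP:     ${ec2_sp:,.0f}")
```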
doodledreads 4 months, 1 week ago
Compute Savings plans are most flexible - lets you change the instance types vs EC2 Savings plans offer best savings.
upvoted 2 times
Yechi 4 months, 1 week ago
With an EC2 Instance Savings Plan, you can change your instance size within the instance family (for example, from c5.xlarge to c5.2xlarge) or the operating system (for example, from Windows to Linux), or move from Dedicated tenancy to Default and continue to receive the discounted rate provided by your EC2 Instance Savings Plan.
upvoted 3 times
everfly 4 months, 1 week ago
EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% in exchange for commitment to usage of individual instance families in a Region (e.g. M5 usage in N. Virginia). This automatically reduces your cost on the selected instance family in that region regardless of AZ, size, OS or tenancy. EC2 Instance Savings Plans give you the flexibility to change your usage between instances within a family in that region. For example, you can move from c5.xlarge running Windows to c5.2xlarge running Linux and automatically benefit from the Savings Plan prices. https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 3 times
Question #348 Topic 1
A company collects data from a large number of participants who use wearable devices. The company stores the data in an Amazon DynamoDB table and uses applications to analyze the data. The data workload is constant and predictable. The company wants to stay at or below its forecasted budget for DynamoDB.
Which solution will meet these requirements MOST cost-effectively?
A. Use provisioned mode and DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). Reserve capacity for the forecasted workload.
B. Use provisioned mode. Specify the read capacity units (RCUs) and write capacity units (WCUs).
C. Use on-demand mode. Set the read capacity units (RCUs) and write capacity units (WCUs) high enough to accommodate changes in the workload.
D. Use on-demand mode. Specify the read capacity units (RCUs) and write capacity units (WCUs) with reserved capacity.
Community vote distribution
B (80%) A (20%)
MrAWSAssociate 6 days, 15 hours ago
Sorry, A will not work, since Reserved Capacity can only be used with DynamoDB Standard table class. So, B is right for this case.
upvoted 1 times
kayodea25 3 months, 2 weeks ago
Option C is the most cost-effective solution for this scenario. In on-demand mode, DynamoDB automatically scales up or down based on the current workload, so the company only pays for the capacity it uses. By setting the RCUs and WCUs high enough to accommodate changes in the workload, the company can ensure that it always has the necessary capacity without overprovisioning and incurring unnecessary costs. Since the workload is constant and predictable, using provisioned mode with reserved capacity (Options A and D) may result in paying for unused capacity during periods of low demand. Option B, using provisioned mode without reserved capacity, may result in throttling during periods of high demand if the provisioned capacity is not sufficient to handle the workload.
upvoted 2 times
Bofi 3 months, 1 week ago
Kayode olode..lol
upvoted 1 times
boxu03 3 months, 2 weeks ago
you forgot "The data workload is constant and predictable", should be B
upvoted 2 times
Steve_4542636 3 months, 3 weeks ago
"The data workload is constant and predictable." https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
"With provisioned capacity you pay for the provision of read and write capacity units for your DynamoDB tables. Whereas with DynamoDB ondemand you pay per request for the data reads and writes that your application performs on your tables."
upvoted 1 times
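To make the provisioned-mode answer concrete, here is a minimal sketch of the create-table request (table and key names are made up; sizing both RCUs and WCUs at 1,000 is an assumed reading of "1,000 IOPS for both reads and writes"):

```python
# Sketch of a DynamoDB table in provisioned mode, sized for the constant,
# predictable workload described in the question. Names are placeholders.
table_params = {
    "TableName": "wearable-telemetry",
    "KeySchema": [{"AttributeName": "participant_id", "KeyType": "HASH"}],
    "AttributeDefinitions": [
        {"AttributeName": "participant_id", "AttributeType": "S"}
    ],
    "BillingMode": "PROVISIONED",  # pay for steady capacity, not per request
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 1000,
        "WriteCapacityUnits": 1000,
    },
}
# boto3.client("dynamodb").create_table(**table_params)
print(table_params["BillingMode"])
```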
Charly0710 3 months, 3 weeks ago
The data workload is constant and predictable, then, isn't on-demand mode.
DynamoDB Standard-IA is not necessary in this context
upvoted 1 times
Lonojack 4 months ago
The problem with (A) is: “Standard-Infrequent Access“. In the question, they say the company has to analyze the Data. That’s why the Correct answer is (B)
upvoted 3 times
Samuel03 4 months, 1 week ago
As the numbers are already known
upvoted 2 times
everfly 4 months, 1 week ago
The data workload is constant and predictable.
upvoted 4 times
Question #349 Topic 1
A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the database with the acquiring company’s AWS account in ap-southeast-3.
What should a solutions architect do to meet these requirements?
A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company’s AWS account.
B. Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring company’s AWS account.
C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company’s AWS account to the KMS key alias. Share the snapshot with the acquiring company's AWS account.
D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3 bucket policy to allow access from the acquiring company’s AWS account.
Community vote distribution
B (100%)
Abrar2022 2 weeks, 3 days ago
Create a database snapshot of the encrypted database. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring company’s AWS account.
upvoted 1 times
Abrar2022 2 weeks, 3 days ago
A. - "So let me get this straight, with the current company the data is protected and encrypted. However, for the acquiring company the data is unencrypted? How is that fair?"
C - Wouldn't recommended this option because using a different AWS managed KMS key will not allow the acquiring company's AWS account to access the encrypted data.
D. - Don't risk it for a biscuit and get fired!!!! - by downloading the database snapshot and uploading it to an Amazon S3 bucket. This will increase the risk of data leakage or loss of confidentiality during the transfer process.
B - CORRECT
upvoted 1 times
SkyZeroZx 1 month, 3 weeks ago
To securely share a backup of the database with the acquiring company's AWS account in the same Region, a solutions architect should create a database snapshot, add the acquiring company's AWS account to the AWS KMS key policy, and share the snapshot with the acquiring company's AWS account.
Option A, creating an unencrypted snapshot, is not recommended as it will compromise the confidentiality of the data. Option C, creating a snapshot that uses a different AWS managed KMS key, does not provide any additional security and will unnecessarily complicate the solution. Option D, downloading the database snapshot and uploading it to an S3 bucket, is not secure as it can expose the data during transit.
Therefore, the correct option is B: Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the snapshot with the acquiring company's AWS account.
upvoted 1 times
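A minimal sketch of the key-policy change in answer B (the account ID 999999999999 is a placeholder, and the exact set of actions granted is an assumption; the source account would append this statement to the key policy's existing "Statement" list):

```python
import json

# Statement that lets the acquiring account use the customer managed KMS
# key, so it can copy/restore the shared encrypted snapshot.
cross_account_statement = {
    "Sid": "AllowAcquiringAccountUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::999999999999:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",
    ],
    "Resource": "*",
}
# Applied with kms.put_key_policy(KeyId=..., PolicyName="default",
#                                 Policy=json.dumps(full_policy))
print(json.dumps(cross_account_statement, indent=2))
```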
elearningtakai 3 months ago
Option B is the correct answer.
Option A is not recommended because copying the snapshot to a new unencrypted snapshot will compromise the confidentiality of the data. Option C is not recommended because using a different AWS managed KMS key will not allow the acquiring company's AWS account to access the encrypted data.
Option D is not recommended because downloading the database snapshot and uploading it to an Amazon S3 bucket will increase the risk of data leakage or loss of confidentiality during the transfer process.
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 1 times
It is C, you have to create a new key. Read below
You can't share a snapshot that's encrypted with the default AWS KMS key. You must create a custom AWS KMS key instead. To share an encrypted Aurora DB cluster snapshot:
Create a custom AWS KMS key.
Add the target account to the custom AWS KMS key.
Create a copy of the DB cluster snapshot using the custom AWS KMS key. Then, share the newly copied snapshot with the target account.
Copy the shared DB cluster snapshot from the target account https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
upvoted 1 times
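The copy-and-share steps described above can be sketched as boto3 request parameters (snapshot identifiers, the key ARN, and the account ID are all placeholders):

```python
# Copy the snapshot under the custom KMS key, then share the copy with
# the target account by adding it to the "restore" attribute.
copy_params = {
    "SourceDBClusterSnapshotIdentifier": "source-snap",
    "TargetDBClusterSnapshotIdentifier": "shared-snap",
    "KmsKeyId": "arn:aws:kms:ap-southeast-3:111111111111:key/example",
}
share_params = {
    "DBClusterSnapshotIdentifier": "shared-snap",
    "AttributeName": "restore",
    "ValuesToAdd": ["999999999999"],  # acquiring company's account ID
}
# rds.copy_db_cluster_snapshot(**copy_params)
# rds.modify_db_cluster_snapshot_attribute(**share_params)
print(share_params["AttributeName"])
```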
Yes, can't share a snapshot that's encrypted with the default AWS KMS key.
But as per the given information "The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed key", it may not be the default AWS KMS key.
upvoted 2 times
I agree with KZM. It is B.
There's no need to create another custom AWS KMS key. https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/ Give target account access to the custom AWS KMS key within the source account
Log in to the source account, and go to the AWS KMS console in the same Region as the DB cluster snapshot.
Select Customer-managed keys from the navigation pane.
Select your custom AWS KMS key (ALREADY CREATED)
From the Other AWS accounts section, select Add another AWS account, and then enter the AWS account number of your target account.
Then:
Copy and share the DB cluster snapshot
upvoted 2 times
I also thought straight away that it could be C; however, the question mentions that the database is already encrypted with an AWS KMS customer managed key. So maybe B could be right, since it already has a custom key, not the default KMS key.
What do you think?
upvoted 3 times
Is it bad that in answer B the acquiring company is using the same KMS key? Should a new KMS key not be used?
upvoted 2 times
Yes, you are right, read my comment above.
upvoted 1 times
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
upvoted 2 times
jennyka76 4 months, 1 week ago
ANSWER - B
upvoted 1 times
Question #350 Topic 1
A company uses a 100 GB Amazon RDS for Microsoft SQL Server Single-AZ DB instance in the us-east-1 Region to store customer transactions. The company needs high availability and automatic recovery for the DB instance.
The company must also run reports on the RDS database several times a year. The report process causes transactions to take longer than usual to post to the customers’ accounts. The company needs a solution that will improve the performance of the report process.
Which combination of steps will meet these requirements? (Choose two.)
A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment.
B. Take a snapshot of the current DB instance. Restore the snapshot to a new RDS deployment in another Availability Zone.
C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica.
D. Migrate the database to RDS Custom.
E. Use RDS Proxy to limit reporting requests to the maintenance window.
Community vote distribution
AC (100%)
elearningtakai Highly Voted 3 months ago
A and C are the correct choices.
B. It will not help improve the performance of the report process.
Migrating to RDS Custom does not address the issue of high availability and automatic recovery.
RDS Proxy can help with scalability and high availability but it does not address the issue of performance for the report process. Limiting the reporting requests to the maintenance window will not provide the required availability and recovery for the DB instance.
upvoted 5 times
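The two steps in answers A and C can be sketched as boto3 request parameters (the instance identifiers and the Availability Zone are placeholders):

```python
# Step A: convert the Single-AZ instance to Multi-AZ for HA and automatic
# failover. Step C: add a read replica in another AZ for the report workload.
enable_multi_az = {
    "DBInstanceIdentifier": "sqlserver-prod",
    "MultiAZ": True,               # synchronous standby in a second AZ
    "ApplyImmediately": True,
}
create_report_replica = {
    "DBInstanceIdentifier": "sqlserver-reports",
    "SourceDBInstanceIdentifier": "sqlserver-prod",
    "AvailabilityZone": "us-east-1b",  # different AZ from the primary
}
# rds.modify_db_instance(**enable_multi_az)
# rds.create_db_instance_read_replica(**create_report_replica)
# Reporting tools then connect to "sqlserver-reports", so the report
# queries no longer slow down transaction posting on the primary.
print(create_report_replica["SourceDBInstanceIdentifier"])
```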
elearningtakai Most Recent 3 months ago
A and C.
upvoted 2 times
KZM 4 months ago
Options A+C
upvoted 2 times
bdp123 4 months ago
https://medium.com/awesome-cloud/aws-difference-between-multi-az-and-read-replicas-in-amazon-rds-60fe848ef53a
upvoted 2 times
jennyka76 4 months, 1 week ago
ANSWER - A & C
upvoted 3 times
Question #351 Topic 1
A company is moving its data management application to AWS. The company wants to transition to an event-driven architecture. The architecture needs to be more distributed and to use serverless concepts while performing the different aspects of the workflow. The company also wants to minimize operational overhead.
Which solution will meet these requirements?
A. Build out the workflow in AWS Glue. Use AWS Glue to invoke AWS Lambda functions to process the workflow steps.
B. Build out the workflow in AWS Step Functions. Deploy the application on Amazon EC2 instances. Use Step Functions to invoke the workflow steps on the EC2 instances.
C. Build out the workflow in Amazon EventBridge. Use EventBridge to invoke AWS Lambda functions on a schedule to process the workflow steps.
D. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine. Use the state machine to invoke AWS Lambda functions to process the workflow steps.
Community vote distribution
D (80%) C (20%)
Lonojack Highly Voted 4 months ago
This is why I’m voting D…..QUESTION ASKED FOR IT TO: use serverless concepts while performing the different aspects of the workflow. Is option D utilizing Serverless concepts?
upvoted 6 times
TariqKipkemei 1 month, 2 weeks ago
Answer is D.
Step Functions is based on state machines and tasks. A state machine is a workflow. A task is a state in a workflow that represents a single unit of work that another AWS service performs. Each step in a workflow is a state.
Depending on your use case, you can have Step Functions call AWS services, such as Lambda, to perform tasks. https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
upvoted 1 times
Karlos99 3 months, 3 weeks ago
There are two main types of routers used in event-driven architectures: event buses and event topics. At AWS, we offer Amazon EventBridge to build event buses and Amazon Simple Notification Service (SNS) to build event topics. https://aws.amazon.com/event-driven-architecture/
upvoted 1 times
TungPham 4 months ago
Step 3: Create a State Machine
Use the Step Functions console to create a state machine that invokes the Lambda function that you created earlier in Step 1. https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-creating-lambda-state-machine.html
In Step Functions, a workflow is called a state machine, which is a series of event-driven steps. Each step in a workflow is called a state.
upvoted 2 times
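A minimal Amazon States Language sketch of the state machine described above, chaining Lambda-backed workflow steps (the function ARNs and step names are made up for illustration):

```python
import json

# Each state is a Task that invokes a Lambda function; Step Functions
# drives the event-driven workflow with no servers to manage.
state_machine = {
    "Comment": "Serverless data management workflow",
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:extract",
            "Next": "TransformData",
        },
        "TransformData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:transform",
            "Next": "LoadData",
        },
        "LoadData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:load",
            "End": True,
        },
    },
}
# sfn.create_state_machine(name="data-workflow",
#                          definition=json.dumps(state_machine),
#                          roleArn="arn:aws:iam::111111111111:role/sfn-role")
print(json.dumps(state_machine, indent=2))
```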
geekgirl22 4 months ago
It is D. Cannot be C because C is "scheduled"
upvoted 4 times
Americo32 4 months ago
I'll go with C, event-driven
upvoted 2 times
MssP 3 months ago
It is true that event-driven architectures are built with EventBridge, but with a Lambda on a schedule??? That is a mismatch, isn't it?
upvoted 2 times
kraken21 2 months, 3 weeks ago
Tricky question huh!
upvoted 1 times
bdp123 4 months ago
AWS Step functions is serverless Visual workflows for distributed applications https://aws.amazon.com/step-functions/
upvoted 1 times
leoattf 4 months ago
Besides, "Visualize and develop resilient workflows for EVENT-DRIVEN architectures."
upvoted 1 times
tellmenowwwww 4 months ago
Could it be C because it's an event-driven architecture?
upvoted 3 times
SMAZ 4 months ago
Option D..
AWS Step functions are used for distributed applications
upvoted 2 times
Question #352 Topic 1
A company is designing the network for an online multi-player game. The game uses the UDP networking protocol and will be deployed in eight AWS Regions. The network architecture needs to minimize latency and packet loss to give end users a high-quality gaming experience.
Which solution will meet these requirements?
A. Set up a transit gateway in each Region. Create inter-Region peering attachments between each transit gateway.
B. Set up AWS Global Accelerator with UDP listeners and endpoint groups in each Region.
C. Set up Amazon CloudFront with UDP turned on. Configure an origin in each Region.
D. Set up a VPC peering mesh between each Region. Turn on UDP for each VPC.
Community vote distribution
B (100%)
lucdt4 1 month ago
AWS Global Accelerator = TCP/UDP minimize latency
upvoted 2 times
TariqKipkemei 1 month, 2 weeks ago
Connect to up to 10 regions within the AWS global network using the AWS Global Accelerator.
upvoted 1 times
OAdekunle 1 month, 3 weeks ago
General
Q: What is AWS Global Accelerator?
A: AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users. AWS Global Accelerator is easy to set up, configure, and manage. It provides static IP addresses that provide a fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and Availability Zones. AWS Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, your user’s location, and policies that you configure. You can test the performance benefits from your location with a speed comparison tool. Like other AWS services, AWS Global Accelerator is a self-service, pay-per-use offering, requiring no long term commitments or minimum fees.
https://aws.amazon.com/global-accelerator/faqs/
upvoted 3 times
elearningtakai 3 months ago
Global Accelerator supports the User Datagram Protocol (UDP) and Transmission Control Protocol (TCP), making it an excellent choice for an online multi-player game using UDP networking protocol. By setting up Global Accelerator with UDP listeners and endpoint groups in each Region, the network architecture can minimize latency and packet loss, giving end users a high-quality gaming experience.
upvoted 3 times
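Answer B can be sketched as the Global Accelerator requests for one of the eight Regions (all ARNs and the game port are placeholders):

```python
# A UDP listener on the accelerator, plus an endpoint group per Region.
listener_params = {
    "AcceleratorArn": "arn:aws:globalaccelerator::111111111111:accelerator/example",
    "Protocol": "UDP",
    "PortRanges": [{"FromPort": 3000, "ToPort": 3000}],  # game traffic port
}
endpoint_group_params = {
    "ListenerArn": "arn:aws:globalaccelerator::111111111111:accelerator/example/listener/abcd1234",
    "EndpointGroupRegion": "us-east-1",  # repeated for each of the 8 Regions
    "EndpointConfigurations": [
        {"EndpointId": "arn:aws:elasticloadbalancing:example", "Weight": 128}
    ],
}
# ga.create_listener(**listener_params)
# ga.create_endpoint_group(**endpoint_group_params)
print(listener_params["Protocol"])
```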
Bofi 3 months, 3 weeks ago
AWS Global Accelerator is a service that improves the availability and performance of applications with local or global users. Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
upvoted 1 times
K0nAn 4 months ago
Global Accelerator for UDP and TCP traffic
upvoted 1 times
bdp123 4 months ago
Neha999 4 months, 1 week ago
B
Global Accelerator for UDP traffic
upvoted 1 times
Question #353 Topic 1
A company hosts a three-tier web application on Amazon EC2 instances in a single Availability Zone. The web application uses a self-managed MySQL database that is hosted on an EC2 instance to store data in an Amazon Elastic Block Store (Amazon EBS) volume. The MySQL database currently uses a 1 TB Provisioned IOPS SSD (io2) EBS volume. The company expects traffic of 1,000 IOPS for both reads and writes at peak traffic.
The company wants to minimize any disruptions, stabilize performance, and reduce costs while retaining the capacity for double the IOPS. The company wants to move the database tier to a fully managed solution that is highly available and fault tolerant.
Which solution will meet these requirements MOST cost-effectively?
A. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with an io2 Block Express EBS volume.
B. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General Purpose SSD (gp2) EBS volume.
C. Use Amazon S3 Intelligent-Tiering access tiers.
D. Use two large EC2 instances to host the database in active-passive mode.
Community vote distribution
B (84%) A (16%)
AlmeroSenior Highly Voted 4 months ago
RDS does not support io2 or io2 Block Express. gp2 can do the required IOPS.
RDS supported storage > https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
gp2 max IOPS > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose.html#gp2-performance
upvoted 11 times
Abrar2022 Most Recent 2 weeks, 3 days ago
Simplified by Almero - thanks.
RDS does not support IO2 or IO2express . GP2 can do the required IOPS
upvoted 1 times
TariqKipkemei 1 month, 2 weeks ago
I tried on the portal and only gp3 and i01 are supported. This is 11 May 2023.
upvoted 3 times
ruqui 4 weeks ago
it doesn't matter whether or not io2 is supported; using io2 is overkill, you only need 1K IOPS. B is the correct answer
upvoted 1 times
SimiTik 2 months, 1 week ago
A
Amazon RDS supports the use of Amazon EBS Provisioned IOPS (io2) volumes. When creating a new DB instance or modifying an existing one, you can select the io2 volume type and specify the amount of IOPS and storage capacity required. RDS also supports the newer io2 Block Express volumes, which can deliver even higher performance for mission-critical database workloads.
upvoted 2 times
TariqKipkemei 1 month, 2 weeks ago
Impossible. I just tried on the portal and only io1 and gp3 are supported.
upvoted 1 times
klayytech 3 months ago
The most cost-effective solution that meets the requirements is to use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General Purpose SSD (gp2) EBS volume. This solution will provide high availability and fault tolerance while minimizing disruptions and stabilizing performance. The gp2 EBS volume can handle up to 16,000 IOPS. You can also scale up to 64 TiB of storage.
Amazon RDS for MySQL provides automated backups, software patching, and automatic host replacement. It also provides Multi-AZ deployments
that automatically replicate data to a standby instance in another Availability Zone. This ensures that data is always available even in the event of a failure.
upvoted 1 times
test_devops_aws 3 months, 1 week ago
RDS does not support io2 !!!
upvoted 1 times
Maximus007 3 months, 2 weeks ago
B: gp3 would be the better option, but considering we only have a gp2 option and such a storage volume, gp2 will be the right choice
upvoted 2 times
I thought the answer here is A. But when I found the link from Amazon website; as per AWS:
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1), and magnetic (also known as standard). They differ in performance characteristics and price, which means that you can tailor your storage performance and cost to the needs of your database workload. You can create MySQL, MariaDB, Oracle, and PostgreSQL RDS DB instances with up to 64 tebibytes (TiB) of storage. You can create SQL Server RDS DB instances with up to 16 TiB of storage. For this amount of storage, use the Provisioned IOPS SSD and General Purpose SSD storage types.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
for DB instances between 1 TiB and 4 TiB, storage is striped across four Amazon EBS volumes providing burst performance of up to 12,000 IOPS.
from "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html"
upvoted 1 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1), and magnetic (also known as standard)
B - MOST cost-effectively
upvoted 2 times
The baseline IOPS performance of gp2 volumes is 3 IOPS per GB, which means that a 1 TB gp2 volume will have a baseline performance of 3,000 IOPS. However, the volume can also burst up to 16,000 IOPS for short periods, but this burst performance is limited and may not be sustained for long durations.
So, I am more prefer option A.
upvoted 1 times
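Checking the gp2 sizing math from this thread (baseline of 3 IOPS per GiB, floor of 100 and cap of 16,000 as described above; the comment's 3,000 figure uses 1,000 GB, while 1 TiB = 1,024 GiB gives 3,072):

```python
# gp2 baseline IOPS: 3 IOPS per GiB, minimum 100, capped at 16,000.
def gp2_baseline_iops(size_gib):
    return min(max(3 * size_gib, 100), 16000)

volume_gib = 1024          # the ~1 TB volume from the question
required_iops = 1000 * 2   # peak 1,000 IOPS, with headroom for double

baseline = gp2_baseline_iops(volume_gib)
assert baseline >= required_iops  # 3,072 baseline covers the 2,000 required
print(baseline)
```

So even without bursting, a 1 TiB gp2 volume's baseline exceeds the doubled IOPS requirement, which supports answer B as the cost-effective choice.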
If a 1 TB gp3 EBS volume is used, the maximum available IOPS according to calculations is 3000. This means that the storage can support a requirement of 1000 IOPS, and even 2000 IOPS if the requirement is doubled.
I am confusing between choosing A or B.
upvoted 1 times
Option A is the correct answer. A Multi-AZ deployment provides high availability and fault tolerance by automatically replicating data to a standby instance in a different Availability Zone. This allows for seamless failover in the event of a primary instance failure. Using an io2 Block Express EBS volume provides the needed IOPS performance and capacity for the database. It is also designed for low latency and high durability, which makes it a good choice for a database tier.
upvoted 1 times
CapJackSparrow 3 months, 2 weeks ago
How will you select io2 when RDS only offers io1 and magnetic?
upvoted 1 times
Correction - hit wrong answer button - meant 'B'
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1) https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1) https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
everfly 4 months, 1 week ago
https://aws.amazon.com/about-aws/whats-new/2021/07/aws-announces-general-availability-amazon-ebs-block-express-volumes/
upvoted 2 times
Question #354 Topic 1
A company hosts a serverless application on AWS. The application uses Amazon API Gateway, AWS Lambda, and an Amazon RDS for PostgreSQL database. The company notices an increase in application errors that result from database connection timeouts during times of peak traffic or unpredictable traffic. The company needs a solution that reduces the application failures with the least amount of change to the code.
What should a solutions architect do to meet these requirements?
A. Reduce the Lambda concurrency rate.
B. Enable RDS Proxy on the RDS DB instance.
C. Resize the RDS DB instance class to accept more connections.
D. Migrate the database to Amazon DynamoDB with on-demand scaling.
Community vote distribution
B (100%)
TariqKipkemei 1 month, 2 weeks ago
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66%.
https://aws.amazon.com/rds/proxy/
upvoted 2 times
elearningtakai 3 months ago
To reduce application failures resulting from database connection timeouts, the best solution is to enable RDS Proxy on the RDS DB instance
upvoted 1 times
nder 4 months ago
RDS Proxy will pool connections, no code changes need to be made
upvoted 1 times
Neha999 4 months, 1 week ago
B RDS Proxy https://aws.amazon.com/rds/proxy/
upvoted 2 times
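The "least amount of change to the code" point can be made concrete: with RDS Proxy, the application typically keeps its driver, credentials, and queries, and only the connection hostname changes. A minimal sketch (all hostnames below are hypothetical examples, not real endpoints):

```python
# Sketch: switching an application from the RDS instance endpoint to an RDS
# Proxy endpoint. Only the host value changes; everything else stays the same.
# All hostnames are hypothetical.

DB_ENDPOINT = "mydb.abc123.us-east-1.rds.amazonaws.com"
PROXY_ENDPOINT = "mydb-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"

def connection_config(host: str) -> dict:
    """Build the connection settings a Lambda function would pass to its driver."""
    return {
        "host": host,
        "port": 5432,
        "dbname": "app",
        "user": "app_user",
        "sslmode": "require",
    }

before = connection_config(DB_ENDPOINT)
after = connection_config(PROXY_ENDPOINT)

# The only key that differs between the two configs is the host.
changed_keys = [k for k in before if before[k] != after[k]]
print(changed_keys)  # ['host']
```

This is why RDS Proxy wins on the "fewest code changes" criterion: the pooling happens inside the proxy, invisible to the application.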
Question #355 Topic 1
A company is migrating an old application to AWS. The application runs a batch job every hour and is CPU intensive. The batch job takes 15 minutes on average with an on-premises server. The server has 64 virtual CPU (vCPU) and 512 GiB of memory.
Which solution will run the batch job within 15 minutes with the LEAST operational overhead?
A. Use AWS Lambda with functional scaling.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
C. Use Amazon Lightsail with AWS Auto Scaling.
D. Use AWS Batch on Amazon EC2.
Community vote distribution
D (100%)
NolaHOla Highly Voted 4 months, 1 week ago
The amount of CPU and memory resources required by the batch job exceeds the capabilities of AWS Lambda and Amazon Lightsail with AWS Auto Scaling, which offer limited compute resources. AWS Fargate offers containerized application orchestration and scalable infrastructure, but may require additional operational overhead to configure and manage the environment. AWS Batch is a fully managed service that automatically provisions the required infrastructure for batch jobs, with options to use different instance types and launch modes.
Therefore, the solution that will run the batch job within 15 minutes with the LEAST operational overhead is D. Use AWS Batch on Amazon EC2. AWS Batch can handle all the operational aspects of job scheduling, instance management, and scaling while using Amazon EC2 instances with the right amount of CPU and memory resources to meet the job's requirements.
upvoted 12 times
everfly Highly Voted 4 months, 1 week ago
AWS Batch is a fully-managed service that can launch and manage the compute resources needed to execute batch jobs. It can scale the compute environment based on the size and timing of the batch jobs.
upvoted 6 times
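To make the sizing argument concrete, here is a sketch of the request parameters one might pass to AWS Batch's `register_job_definition` for a container job matched to the on-premises server (64 vCPU, 512 GiB). The job name, image URI, and command are hypothetical; this only builds the request dict and does not call AWS:

```python
# Sketch: AWS Batch container job definition parameters sized to match the
# on-premises server. Names and the container image are hypothetical examples.

job_definition = {
    "jobDefinitionName": "hourly-batch-job",  # illustrative name
    "type": "container",
    "containerProperties": {
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/batch-job:latest",
        "command": ["python", "run_batch.py"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "64"},
            {"type": "MEMORY", "value": "524288"},  # 512 GiB expressed in MiB
        ],
    },
    # Fail the attempt if a run exceeds the 15-minute target.
    "timeout": {"attemptDurationSeconds": 900},
}
```

Note that 64 vCPU / 512 GiB is far beyond Lambda's limits (10 GB memory, 6 vCPU), which is the core reason A is ruled out.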
TariqKipkemei Most Recent 1 month, 2 weeks ago
JLII 3 months, 3 weeks ago
Not A because: "AWS Lambda now supports up to 10 GB of memory and 6 vCPU cores for Lambda Functions." https://aws.amazon.com/about-aws/whats-new/2020/12/aws-lambda-supports-10gb-memory-6-vcpu-cores-lambda-functions/ vs. "The server has 64 virtual CPU (vCPU) and 512 GiB of memory" in the question.
upvoted 4 times
geekgirl22 4 months ago
A is the answer. Lambda is known to have a limit of 15 minutes. So as long as it says "within 15 minutes," that should be a clear indication it is Lambda.
upvoted 1 times
nder 4 months ago
Wrong, the job takes "On average 15 minutes" and requires more cpu and ram than lambda can deal with. AWS Batch is correct in this scenario
upvoted 3 times
geekgirl22 4 months ago
read the rest of the question which gives the answer:
"Which solution will run the batch job within 15 minutes with the LEAST operational overhead?" Keyword "Within 15 minutes"
upvoted 1 times
Lonojack 4 months ago
What happens if it EXCEEDS the 15 min AVERAGE? Average = it can possibly be more than 15 min.
The safer bet would be option D: AWS Batch on EC2
upvoted 6 times
Question #356 Topic 1
A company stores its data objects in Amazon S3 Standard storage. A solutions architect has found that 75% of the data is rarely accessed after 30 days. The company needs all the data to remain immediately accessible with the same high availability and resiliency, but the company wants to minimize storage costs.
Which storage solution will meet these requirements?
A. Move the data objects to S3 Glacier Deep Archive after 30 days.
B. Move the data objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
C. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
D. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately.
Community vote distribution
B (100%)
Piccalo 2 months, 4 weeks ago
Highly available, so One Zone-IA is out of the question.
Glacier Deep Archive isn't immediately accessible (12-48 hours). B is the answer.
upvoted 3 times
elearningtakai 3 months ago
S3 Glacier Deep Archive is intended for data that is rarely accessed and can tolerate retrieval times measured in hours. Moving data to S3 One Zone-IA immediately would not meet the requirement of immediate accessibility with the same high availability and resiliency.
upvoted 1 times
KS2020 3 months, 1 week ago
The answer should be C.
S3 One Zone-IA is for data that is accessed less frequently but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA.
https://aws.amazon.com/s3/storage-classes/#:~:text=S3%20One%20Zone%2DIA%20is,less%20than%20S3%20Standard%2DIA.
upvoted 1 times
shanwford 3 months ago
The question emphasises keeping the same high availability class - S3 One Zone-IA doesn't support the multiple Availability Zone data resilience model that S3 Standard-Infrequent Access uses.
upvoted 2 times
Lonojack 4 months ago
Needs immediate accessibility after 30 days, IF the objects need to be accessed.
upvoted 4 times
bdp123 4 months ago
S3 Standard-Infrequent Access after 30 days
upvoted 2 times
NolaHOla 4 months, 1 week ago
B
Option B - Move the data objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days - will meet the requirements of keeping the data immediately accessible with high availability and resiliency, while minimizing storage costs. S3 Standard-IA is designed for infrequently accessed data, and it provides a lower storage cost than S3 Standard, while still offering the same low latency, high throughput, and high durability as S3 Standard.
upvoted 3 times
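The transition in option B maps directly onto an S3 Lifecycle rule. A minimal sketch of the configuration dict that `put_bucket_lifecycle_configuration` expects (rule ID is illustrative; this builds the dict only and does not call AWS):

```python
# Sketch: lifecycle rule that transitions all objects to S3 Standard-IA after
# 30 days. The rule ID is a hypothetical example.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "move-to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}
```

Swapping `STANDARD_IA` for `ONEZONE_IA` or `DEEP_ARCHIVE` would implement options C/D/A instead, which is why the storage-class choice, not the mechanism, is what the question tests.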
Question #357 Topic 1
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The company uses Amazon EC2 Windows Server
instances behind an Application Load Balancer to host its dynamic application. The company needs a highly available storage solution for the application. The application consists of static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.
D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.
E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on each EC2 instance to share the files.
Community vote distribution
AD (100%)
Steve_4542636 3 months, 4 weeks ago
A because ElastiCache, despite being ideal for leaderboards per Amazon, doesn't cache at edge locations. D because FSx has higher performance for low-latency needs.
https://www.techtarget.com/searchaws/tip/Amazon-FSx-vs-EFS-Compare-the-AWS-file-services
"FSx is built for high performance and submillisecond latency using solid-state drive storage volumes. This design enables users to select storage capacity and latency independently. Thus, even a subterabyte file system can have 256 Mbps or higher throughput and support volumes up to 64 TB."
upvoted 3 times
Nel8 3 months, 2 weeks ago
Just to add, ElastiCache is used in front of an AWS database.
upvoted 2 times
KZM 4 months ago
It is obvious that A and D.
upvoted 1 times
bdp123 4 months ago
both A and D seem correct
upvoted 1 times
NolaHOla 4 months, 1 week ago
A and D seems correct
upvoted 1 times
Question #358 Topic 1
A social media company runs its application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. The application has more than a billion images stored in an Amazon S3 bucket and processes thousands of
images each second. The company wants to resize the images dynamically and serve appropriate formats to clients. Which solution will meet these requirements with the LEAST operational overhead?
A. Install an external image management library on an EC2 instance. Use the image management library to process the images.
B. Create a CloudFront origin request policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
C. Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
D. Create a CloudFront response headers policy. Use the policy to automatically resize images and to serve the appropriate format based on the User-Agent HTTP header in the request.
Community vote distribution
C (100%)
NolaHOla Highly Voted 4 months, 1 week ago
Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
Using a Lambda@Edge function with an external image management library is the best solution to resize the images dynamically and serve appropriate formats to clients. Lambda@Edge is a serverless computing service that allows running custom code in response to CloudFront events, such as viewer requests and origin requests. By using a Lambda@Edge function, it's possible to process images on the fly and modify the CloudFront response before it's sent back to the client. Additionally, Lambda@Edge has built-in support for external libraries that can be used to process images. This approach will reduce operational overhead and scale automatically with traffic.
upvoted 8 times
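A minimal sketch of what such a Lambda@Edge origin-request handler could look like. The actual resizing (done with an image library bundled in the deployment package) is omitted; this only shows the header-based format selection, and the paths in the fake event are hypothetical:

```python
# Sketch: Lambda@Edge origin-request handler that serves a .webp rendition to
# clients whose Accept header advertises WebP support. Resizing logic omitted;
# URIs and the sample event are hypothetical examples.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})  # CloudFront lowercases header keys
    accept_values = headers.get("accept", [])
    accepts_webp = any("image/webp" in h.get("value", "") for h in accept_values)

    if accepts_webp and request["uri"].endswith(".jpg"):
        # Point the origin request at the WebP rendition instead.
        request["uri"] = request["uri"][: -len(".jpg")] + ".webp"
    return request

# Fake CloudFront event for a quick local check.
event = {
    "Records": [{
        "cf": {"request": {
            "uri": "/images/cat.jpg",
            "headers": {"accept": [{"key": "Accept", "value": "image/webp,image/*"}]},
        }}
    }]
}
print(handler(event, None)["uri"])  # /images/cat.webp
```

The AWS blog post linked in the comments below describes a fuller version of this pattern, including the on-the-fly resize step.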
bdp123 Most Recent 4 months ago
https://aws.amazon.com/cn/blogs/networking-and-content-delivery/resizing-images-with-amazon-cloudfront-lambdaedge-aws-cdn-blog/
upvoted 3 times
everfly 4 months, 1 week ago
https://aws.amazon.com/cn/blogs/networking-and-content-delivery/resizing-images-with-amazon-cloudfront-lambdaedge-aws-cdn-blog/
upvoted 2 times
Question #359 Topic 1
A hospital needs to store patient records in an Amazon S3 bucket. The hospital’s compliance team must ensure that all protected health information (PHI) is encrypted in transit and at rest. The compliance team must administer the encryption key for data at rest.
Which solution will meet these requirements?
A. Create a public SSL/TLS certificate in AWS Certificate Manager (ACM). Associate the certificate with Amazon S3. Configure default encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS keys.
B. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default encryption for each S3 bucket to use server-side encryption with S3 managed encryption keys (SSE-S3). Assign the compliance team to manage the SSE-S3 keys.
C. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS keys.
D. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Use Amazon Macie to protect the sensitive data that is stored in Amazon S3. Assign the compliance team to manage Macie.
Community vote distribution
C (79%) D (16%) 5%
NolaHOla Highly Voted 4 months, 1 week ago
Option C is correct because it allows the compliance team to manage the KMS keys used for server-side encryption, thereby providing the necessary control over the encryption keys. Additionally, the use of the "aws:SecureTransport" condition on the bucket policy ensures that all connections to the S3 bucket are encrypted in transit.
Option B might be misleading, but with SSE-S3 the encryption keys are managed by AWS and not by the compliance team.
upvoted 10 times
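The two pieces option C describes can be sketched as plain JSON documents. The bucket name and KMS key ARN below are hypothetical; this builds the dicts only and does not call AWS:

```python
# Sketch: (1) a bucket policy that denies non-TLS access via aws:SecureTransport,
# and (2) a default-encryption configuration using a customer-managed KMS key.
# Bucket name and key ARN are hypothetical examples.

bucket = "phi-records-bucket"

# 1) Deny any request that is not made over HTTPS/TLS.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

# 2) Default SSE-KMS encryption with a key the compliance team administers.
default_encryption = {
    "Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
        }
    }]
}
```

Because the KMS key is customer-managed, key policies and rotation stay under the compliance team's control, which SSE-S3 (option B) cannot offer.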
Lonojack 4 months ago
Perfect explanation. I agree.
upvoted 2 times
Yadav_Sanjay Most Recent 1 month, 1 week ago
D - Can't be because - Amazon Macie is a data security service that uses machine learning (ML) and pattern matching to discover and help protect your sensitive data.
Macie discovers sensitive information and can help with protection, but it cannot protect the data by itself.
upvoted 1 times
TariqKipkemei 1 month, 2 weeks ago
B can work if they do not want control over encryption keys.
upvoted 1 times
Russs99 3 months ago
Option A proposes creating a public SSL/TLS certificate in AWS Certificate Manager and associating it with Amazon S3. This step ensures that data is encrypted in transit. Then, the default encryption for each S3 bucket will be configured to use server-side encryption with AWS KMS keys (SSE-KMS), which will provide encryption at rest for the data stored in S3. In this solution, the compliance team will manage the KMS keys, ensuring that they control the encryption keys for data at rest.
upvoted 1 times
Shrestwt 2 months, 1 week ago
ACM cannot be integrated with Amazon S3 bucket directly.
upvoted 1 times
Bofi 3 months ago
Option C seems to be the correct answer. Option A is also close, but ACM cannot be integrated with an Amazon S3 bucket directly; hence, you cannot attach a TLS certificate to S3. You can only attach a TLS certificate to an ALB, API Gateway, CloudFront, and maybe Global Accelerator, but definitely NOT to an EC2 instance or an S3 bucket.
upvoted 1 times
CapJackSparrow 3 months, 2 weeks ago
D makes no sense.
upvoted 2 times
Dody 3 months, 3 weeks ago
Correct Answer is "C"
“D” is not correct because Amazon Macie securely stores your data at rest using AWS encryption solutions. Macie encrypts data, such as findings, using an AWS managed key from AWS Key Management Service (AWS KMS). However, in the question there is a requirement that the compliance team must administer the encryption key for data at rest.
https://docs.aws.amazon.com/macie/latest/user/data-protection.html
upvoted 2 times
cegama543 3 months, 3 weeks ago
Option C will meet the requirements. Explanation:
The compliance team needs to administer the encryption key for data at rest in order to ensure that protected health information (PHI) is encrypted in transit and at rest. Therefore, we need to use server-side encryption with AWS KMS keys (SSE-KMS). The default encryption for each S3 bucket can be configured to use SSE-KMS to ensure that all new objects in the bucket are encrypted with KMS keys.
Additionally, we can configure the S3 bucket policies to allow only encrypted connections over HTTPS (TLS) using the aws:SecureTransport condition. This ensures that the data is encrypted in transit.
upvoted 1 times
Karlos99 3 months, 3 weeks ago
We must ensure the data is encrypted in transit and at rest. Macie is used to discover and recognize any PII or protected health information. We already know that the hospital is working with sensitive data, so protect it with KMS and SSL. Answer D is unnecessary.
upvoted 1 times
Steve_4542636 3 months, 3 weeks ago
Macie does not encrypt the data like the question is asking https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html
Also, SSE-S3 encryption is fully managed by AWS so the Compliance Team can't administer this.
upvoted 2 times
Abhineet9148232 3 months, 3 weeks ago
C [Correct]: Ensures HTTPS-only traffic (encrypted in transit) and enables the compliance team to govern the encryption key.
D [Incorrect]: Misleading; PHI is required to be encrypted, not discovered. Macie is a discovery service. (https://aws.amazon.com/macie/)
upvoted 4 times
Nel8 4 months ago
Correct answer should be D. "Use Amazon Macie to protect the sensitive data..."
As the requirement says, "The hospital's compliance team must ensure that all protected health information (PHI) is encrypted in transit and at rest."
Macie protects personal records such as PHI. Macie provides you with an inventory of your S3 buckets, and automatically evaluates and monitors the buckets for security and access control. If Macie detects a potential issue with the security or privacy of your data, such as a bucket that becomes publicly accessible, Macie generates a finding for you to review and remediate as necessary.
upvoted 3 times
Drayen25 4 months ago
Option C should be
upvoted 2 times
Question #360 Topic 1
A company uses Amazon API Gateway to run a private gateway with two REST APIs in the same VPC. The BuyStock RESTful web service calls the CheckFunds RESTful web service to ensure that enough funds are available before a stock can be purchased. The company has noticed in the VPC flow logs that the BuyStock RESTful web service calls the CheckFunds RESTful web service over the internet instead of through the VPC. A
solutions architect must implement a solution so that the APIs communicate through the VPC. Which solution will meet these requirements with the FEWEST changes to the code?
A. Add an X-API-Key header in the HTTP header for authorization.
B. Use an interface endpoint.
C. Use a gateway endpoint.
D. Add an Amazon Simple Queue Service (Amazon SQS) queue between the two REST APIs.
Community vote distribution
B (85%) C (15%)
everfly Highly Voted 4 months, 1 week ago
an interface endpoint is a horizontally scaled, redundant VPC endpoint that provides private connectivity to a service. It is an elastic network interface with a private IP address that serves as an entry point for traffic destined to the AWS service. Interface endpoints are used to connect VPCs with AWS services
upvoted 10 times
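For reference, a sketch of the parameters one might pass to EC2's `create_vpc_endpoint` to build an interface endpoint for API Gateway (the `execute-api` service). All resource IDs are hypothetical placeholders; this builds the request dict only and does not call AWS:

```python
# Sketch: interface VPC endpoint parameters for private API Gateway access.
# The VPC, subnet, and security-group IDs are hypothetical examples.

endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.execute-api",
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    # Private DNS lets the existing API hostnames resolve to the endpoint's
    # private IPs, so the calling service needs no code change.
    "PrivateDnsEnabled": True,
}
```

`PrivateDnsEnabled` is what makes this the "fewest changes to the code" option: the BuyStock service keeps calling the same API hostname, but traffic now stays inside the VPC.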
envest Most Recent 1 month ago
Answer B (from abylead)
With API Gateway, you can create multiple private REST APIs that are only accessible with an interface VPC endpoint. To allow or deny simple or cross-account access to your API from selected VPCs and their endpoints, you use resource policies. In addition, you can also use Direct Connect for a connection between an on-premises network and your VPC or your private API.
API GW to VPC: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html
Less correct & incorrect (infeasible & inadequate) answers:
A) An X-API-Key in the HTTP header is for authorization and would need automated processing and code changes: inadequate.
C) VPC gateway endpoints are for S3 or DynamoDB, not for RESTful services: infeasible.
D) An SQS queue between the 2 REST APIs needs endpoints and some changes: inadequate.
upvoted 1 times
lucdt4 1 month ago
C (use a gateway endpoint) is wrong because gateway endpoints only support S3 and DynamoDB, so B is correct.
upvoted 1 times
aqmdla2002 1 month, 1 week ago
I select C because it's the solution with the "FEWEST changes to the code".
upvoted 1 times
TariqKipkemei 1 month, 2 weeks ago
An interface endpoint is powered by PrivateLink, and uses an elastic network interface (ENI) as an entry point for traffic destined to the service
upvoted 1 times
kprakashbehera 3 months, 2 weeks ago
BBBBBB
upvoted 1 times
siyam008 3 months, 3 weeks ago
https://www.linkedin.com/pulse/aws-interface-endpoint-vs-gateway-alex-chang
upvoted 1 times
siyam008 3 months, 3 weeks ago
Correct answer is B. Incorrectly selected C
upvoted 1 times
DASBOL 4 months ago
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html
upvoted 4 times
Sherif_Abbas 4 months ago
The only time an interface endpoint may be preferable (for S3 or DynamoDB) over a gateway endpoint is if you require access from on-premises, for example if you want private access from your on-premises data center.
upvoted 2 times
Steve_4542636 3 months, 3 weeks ago
The RESTful services are neither S3 nor DynamoDB services, so a VPC gateway endpoint isn't applicable here.
upvoted 3 times
bdp123 4 months ago
fewest changes to code and below link:
https://gkzz.medium.com/what-is-the-differences-between-vpc-endpoint-gateway-endpoint-ae97bfab97d8
upvoted 2 times
PoisonBlack 1 month, 3 weeks ago
This really helped me understand the difference between the two. Thx
upvoted 1 times
KAUS2 4 months ago
Agreed B
upvoted 2 times
AlmeroSenior 4 months ago
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html - Interface EP
upvoted 3 times
Question #361 Topic 1
A company hosts a multiplayer gaming application on AWS. The company wants the application to read data with sub-millisecond latency and run one-time queries on historical data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.
B. Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
C. Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
D. Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.
Community vote distribution
C (100%)
marufxplorer 1 week ago
C
Amazon DynamoDB with DynamoDB Accelerator (DAX): DynamoDB is a fully managed NoSQL database service provided by AWS. It is designed for low-latency access to frequently accessed data. DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB that can significantly reduce read latency, making it suitable for achieving sub-millisecond read times.
upvoted 1 times
lucdt4 1 month ago
C is correct
A doesn't meet the requirement (LEAST operational overhead) because it uses a custom script. B: doesn't address the requirement.
D: Kinesis is for near-real-time streaming (not for reads)
-> C is correct
upvoted 2 times
lexotan 2 months, 1 week ago
Would be nice to have an explanation of why ExamTopics selects its answers.
upvoted 3 times
DagsH 3 months, 1 week ago
Agreed C will be best because of DynamoDB DAX
upvoted 1 times
BeeKayEnn 3 months, 1 week ago
Option C will be the best fit.
As they would like to retrieve the data with sub-millisecond, DynamoDB with DAX is the answer.
DynamoDB supports some of the world's largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage.
upvoted 2 times
Grace83 3 months, 1 week ago
C is the correct answer
upvoted 1 times
KAUS2 3 months, 2 weeks ago
Option C is the right one. The question clearly states "sub-millisecond latency".
upvoted 2 times
smgsi 3 months, 2 weeks ago
https://aws.amazon.com/dynamodb/dax/?nc1=h_ls
upvoted 3 times
ACasper 3 months, 2 weeks ago
Answer is C for sub-millisecond
upvoted 3 times
Question #362 Topic 1
A company uses a payment processing system that requires messages for a particular payment ID to be received in the same order that they were sent. Otherwise, the payments might be processed incorrectly.
Which actions should a solutions architect take to meet this requirement? (Choose two.)
A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.
Community vote distribution
BE (63%) AE (37%)
Ashkan_10 Highly Voted 2 months, 3 weeks ago
Option B is preferred over A because Amazon Kinesis Data Streams inherently maintain the order of records within a shard, which is crucial for the given requirement of preserving the order of messages for a particular payment ID. When you use the payment ID as the partition key, all messages for that payment ID will be sent to the same shard, ensuring that the order of messages is maintained.
On the other hand, Amazon DynamoDB is a NoSQL database service that provides fast and predictable performance with seamless scalability. While it can store data with partition keys, it does not guarantee the order of records within a partition, which is essential for the given use case. Hence, using Kinesis Data Streams is more suitable for this requirement.
As DynamoDB does not keep the order, I think BE is the correct answer here.
upvoted 7 times
omoakin Most Recent 1 month ago
AE
upvoted 1 times
Konb 1 month ago
IF the question would be "Choose all the solutions that fulfill these requirements" I would chosen BE.
But it is:
"Which actions should a solutions architect take to meet this requirement? "
For this reason I chose AE, because we don't need both Kinesis AND SQS for this solution. Both choices complement order processing: the order is stored in the DB, and the work item goes to the queue.
upvoted 2 times
luisgu 1 month, 2 weeks ago
E --> no doubt
B --> see https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
upvoted 1 times
kruasan 1 month, 4 weeks ago
SQS FIFO queues guarantee that messages are received in the exact order they are sent. Using the payment ID as the message group ensures all messages for a payment ID are received sequentially.
Kinesis data streams can also enforce ordering on a per partition key basis. Using the payment ID as the partition key will ensure strict ordering of messages for each payment ID.
upvoted 2 times
kruasan 1 month, 4 weeks ago
The other options do not guarantee message ordering. DynamoDB and ElastiCache are not message queues. SQS standard queues deliver messages in approximate order only.
upvoted 2 times
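Option E's mechanism can be sketched concretely: on a FIFO queue, `MessageGroupId` scopes the ordering guarantee, so using the payment ID as the group ID keeps each payment's messages in order while different payments can still be processed in parallel. A minimal sketch building the `send_message` parameters (queue URL and IDs are hypothetical; no AWS call is made):

```python
# Sketch: SQS FIFO send_message parameters using the payment ID as the
# MessageGroupId. Queue URL and payment IDs are hypothetical examples.

import json

def build_send_params(queue_url: str, payment_id: str, event: dict, seq: int) -> dict:
    return {
        "QueueUrl": queue_url,
        "MessageBody": json.dumps(event),
        "MessageGroupId": payment_id,                     # ordering scope per payment
        "MessageDeduplicationId": f"{payment_id}-{seq}",  # dedup within the 5-min window
    }

params = build_send_params(
    "https://sqs.us-east-1.amazonaws.com/123456789012/payments.fifo",
    "payment-42",
    {"type": "AUTHORIZED", "amount": 100},
    1,
)
```

Kinesis (option B) achieves the same per-key ordering with a partition key instead of a message group ID, which is why B and E together satisfy the requirement.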
nosense 2 months ago
Option A, writing the messages to an Amazon DynamoDB table, would not necessarily preserve the order of messages for a particular payment ID
upvoted 1 times
MssP 3 months ago
I don't understand A. How can you guarantee the order with DynamoDB? Order is guaranteed with SQS FIFO and with a Kinesis data stream within 1 shard...
upvoted 4 times
Grace83 3 months, 1 week ago
AE is the answer
upvoted 2 times
XXXman 3 months, 2 weeks ago
DynamoDB or Kinesis data stream - which one preserves order?
upvoted 1 times
kprakashbehera 3 months, 2 weeks ago
Ans - AE
Kinesis and ElastiCache are not required in this case.
upvoted 2 times
Question #363 Topic 1
A company is building a game system that needs to send unique events to separate leaderboard, matchmaking, and authentication services concurrently. The company needs an AWS event-driven system that guarantees the order of the events.
Which solution will meet these requirements?
A. Amazon EventBridge event bus
B. Amazon Simple Notification Service (Amazon SNS) FIFO topics
C. Amazon Simple Notification Service (Amazon SNS) standard topics
D. Amazon Simple Queue Service (Amazon SQS) FIFO queues
Community vote distribution
B (59%) D (28%) 13%
cra2yk Highly Voted 3 months, 2 weeks ago
Given B by chatgpt:
The solution that meets the requirements of sending unique events to separate services concurrently and guaranteeing the order of events is option B, Amazon Simple Notification Service (Amazon SNS) FIFO topics.
Amazon SNS FIFO topics ensure that messages are processed in the order in which they are received. This makes them an ideal choice for situations where the order of events is important. Additionally, Amazon SNS allows messages to be sent to multiple endpoints, which meets the requirement of sending events to separate services concurrently.
Amazon EventBridge event bus can also be used for sending events, but it does not guarantee the order of events. Amazon Simple Notification Service (Amazon SNS) standard topics do not guarantee the order of messages.
Amazon Simple Queue Service (Amazon SQS) FIFO queues ensure that messages are processed in the order in which they are received, but they are designed for message queuing, not publishing.
upvoted 6 times
omoakin 1 month ago
Answer is D. B is just for messaging but can't do ordering.
I went to check ChatGPT and it did not choose B. I don't know which one you subscribed to... or maybe it's free. LOL, its answer is D.
upvoted 1 times
nw47 3 months, 1 week ago
ChatGPT also gives A:
The requirement of maintaining the order of events rules out the use of Amazon SNS standard topics as they do not provide any ordering guarantees.
Amazon SNS FIFO topics offer message ordering but do not support concurrent delivery to multiple subscribers, so this option is also not a suitable choice.
Amazon SQS FIFO queues provide both ordering guarantees and support concurrent delivery to multiple subscribers. However, the use of a queue adds additional latency, and the ordering guarantee may not be required in this scenario.
The best option for this use case is Amazon EventBridge event bus. It allows multiple targets to subscribe to an event bus and receive the same event simultaneously, meeting the requirement of concurrent delivery to multiple subscribers. Additionally, EventBridge provides ordering guarantees within an event bus, ensuring that events are processed in the order they are received.
upvoted 1 times
jayce5 Most Recent 3 weeks, 1 day ago
It should be the fan-out pattern, and the pattern starts with Amazon SNS FIFO for the orders.
upvoted 1 times
danielklein09 3 weeks, 6 days ago
Answer is D. You are so lazy: instead of searching the documentation or your notes, you are asking ChatGPT. Do you really think you will pass this exam? Hint: ask ChatGPT.
upvoted 1 times
lucdt4 1 month ago
D is correct (SQS FIFO)
Because B can't send events concurrently, though it can send them in the order of the events.
upvoted 1 times
TariqKipkemei 1 month, 2 weeks ago
Amazon SNS is a highly available and durable publish-subscribe messaging service that allows applications to send messages to multiple subscribers through a topic. SNS FIFO topics are designed to ensure that messages are delivered in the order in which they are sent. This makes them ideal for situations where message order is important, such as in the case of the company's game system.
Option A, Amazon EventBridge event bus, is a serverless event bus service that makes it easy to build event-driven applications. While it supports ordering of events, it does not provide guarantees on the order of delivery.
upvoted 3 times
I honestly can't understand why people go to ChatGPT to ask for the answers - if I recall correctly, its training data only goes up to 2021...
upvoted 4 times
rushi0611 1 month, 3 weeks ago
Option B:
The requirement is to send unique events to separate leaderboard, matchmaking, and authentication services concurrently. Concurrently = fan-out pattern. SQS alone cannot do a fan-out; SQS will be the consumer for SNS FIFO.
upvoted 1 times
BBBBBBB
upvoted 1 times
Guys, I've got a question here... can SQS perform fan-out by itself, without SNS? Here's what our beloved AI said:
AWS SQS (Simple Queue Service) can perform fan-out by itself using its native functionality, without the need for SNS (Simple Notification Service).
having that answer... would D be an option?
upvoted 2 times
D for me, and ChatGPT
upvoted 1 times
I think it should be D, because I saw nothing in the question regarding subscribers, which is what would point to SNS.
upvoted 1 times
Separate leader boards -> fan out pattern.
upvoted 1 times
maver144 2 months, 3 weeks ago
Vague question. It's either SNS FIFO or SQS FIFO. Consider that SNS FIFO can only have SQS FIFO as a subscriber. You can't emit events to other endpoint types as you can with standard SNS.
upvoted 3 times
kraken21 2 months, 3 weeks ago
I think SNS FIFO FanOut/FIFO should be a good choice here. https://docs.aws.amazon.com/sns/latest/dg/fifo-example-use-case.html
upvoted 1 times
Since the question specifically mentions separate consumer services, SNS topics would need to be used to ensure ordering as well as filtering on subscriptions.
upvoted 1 times
SNS Ordering – You configure a message group by including a message group ID when publishing a message to a FIFO topic. For each message group ID, all messages are sent and delivered in order of their arrival.
upvoted 1 times
ashish0826 3 months ago
ChatGPT gave me D
The requirement for ordering events rules out options B and C, as neither Amazon SNS standard nor Amazon SNS FIFO topics guarantee message order. Option A, Amazon EventBridge, supports event ordering and is capable of routing events to multiple targets concurrently. However, EventBridge is designed for processing events that can trigger AWS Lambda functions or other targets, and it may not be the best choice for sending events to third-party services.
Therefore, the best option for this scenario is D, Amazon Simple Queue Service (Amazon SQS) FIFO queues. SQS FIFO queues guarantee the order of messages and support multiple concurrent consumers. Each target service can have its own SQS FIFO queue, and the game system can send events to all the queues simultaneously to ensure that each service receives the correct sequence of events.
upvoted 4 times
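The SNS FIFO fan-out that several commenters describe can be sketched as a single publish request that every subscribed SQS FIFO queue (leaderboard, matchmaking, authentication) receives in order. This is only a sketch; the topic ARN, event fields, and player ID are hypothetical, and the dict mirrors the parameters an SNS `Publish` call accepts.

```python
import json

def build_publish_request(topic_arn, event, player_id):
    # MessageGroupId scopes the ordering guarantee: using the player ID keeps
    # each player's events in sequence without serializing all players.
    return {
        "TopicArn": topic_arn,
        "Message": json.dumps(event),
        "MessageGroupId": player_id,
        # FIFO topics require a dedup ID (or content-based deduplication).
        "MessageDeduplicationId": event["event_id"],
    }

# Hypothetical game event published once, fanned out to all FIFO subscribers.
req = build_publish_request(
    "arn:aws:sns:us-east-1:123456789012:game-events.fifo",
    {"event_id": "evt-001", "type": "match_won"},
    "player-42",
)
```

Each service then reads from its own SQS FIFO queue subscribed to the topic, which is what makes the delivery concurrent while preserving per-group order.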
Question #364 Topic 1
A hospital is designing a new application that gathers symptoms from patients. The hospital has decided to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) in the architecture.
A solutions architect is reviewing the infrastructure design. Data must be encrypted at rest and in transit. Only authorized personnel of the hospital should be able to access the data.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.
B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals.
C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a condition in the topic policy to allow only encrypted connections over TLS.
D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
Community vote distribution
BD (67%) CD (17%) BE (17%)
fkie4 Highly Voted 3 months, 2 weeks ago
read this:
https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html
upvoted 8 times
TariqKipkemei Most Recent 1 month, 1 week ago
Only options C and D cover encryption in transit, encryption at rest, and a restriction policy.
upvoted 1 times
Lalo 2 weeks, 6 days ago
Answer is BD
SNS: AWS KMS, key policy, SQS: AWS KMS, Key policy
upvoted 1 times
luisgu 1 month, 2 weeks ago
"In IAM policies, you can't specify the principal in an identity-based policy because it applies to the user or role to which it is attached." Reference: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/security_iam_service-with-iam.html
that excludes E
upvoted 1 times
imvb88 2 months, 1 week ago
Encryption in transit = use SSL/TLS -> rules out A and B.
Encryption at rest = encryption on the components -> keep C, D, E. KMS keys always need a key policy; an IAM policy is optional -> E is out.
-> C and D are left, one for SNS, one for SQS. TLS: checked; encryption on components: checked.
upvoted 2 times
Lalo 2 weeks, 6 days ago
Answer is BD
SNS: AWS KMS, key policy, SQS: AWS KMS, Key policy
upvoted 1 times
imvb88 2 months, 1 week ago
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-data-encryption.html
You can protect data in transit using Secure Sockets Layer (SSL) or client-side encryption. You can protect data at rest by requesting Amazon SQS to encrypt your messages before saving them to disk in its data centers and then decrypt them when the messages are received.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html
A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. Every KMS key must have exactly one key policy. The statements in the key policy determine who has permission to use the KMS key and how they can use it. You can also use IAM policies and grants to control access to the KMS key, but every KMS key must have a key policy.
upvoted 1 times
MarkGerwich 3 months, 1 week ago
CD
B does not include encryption in transit.
upvoted 3 times
MssP 3 months ago
In transit is included in D. C does not include encryption at rest; server-side encryption would include it.
upvoted 1 times
Bofi 3 months ago
That was my objection to option B. C and D cover both encryption at rest and server-side encryption.
upvoted 1 times
Maximus007 3 months, 2 weeks ago
ChatGPT returned AD as the correct answer
upvoted 1 times
cegama543 3 months, 2 weeks ago
B: To encrypt data at rest, we can use a customer-managed key stored in AWS KMS to encrypt the SNS components.
E: To restrict access to the data and allow only authorized personnel to access the data, we can apply an IAM policy to restrict key usage to a set of authorized principals. We can also set a condition in the queue policy to allow only encrypted connections over TLS to encrypt data in transit.
upvoted 2 times
Karlos99 3 months, 2 weeks ago
For a customer managed KMS key, you must configure the key policy to add permissions for each queue producer and consumer. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html
upvoted 3 times
taehyeki 3 months, 2 weeks ago
bebebe
upvoted 1 times
taehyeki 3 months, 2 weeks ago
bdbdbdbd
All KMS keys must have a key policy. IAM policies are optional.
upvoted 5 times
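The combination most voters land on (B and D) boils down to three settings per service: a KMS customer managed key for encryption at rest, a key policy restricting the key's principals, and a resource policy that refuses non-TLS connections. A minimal sketch of the SQS side, assuming hypothetical queue, account, and key-alias names; the dicts mirror what an SQS `SetQueueAttributes` call accepts.

```python
import json

# Hypothetical queue policy for option D: deny any request that does not
# arrive over TLS (aws:SecureTransport evaluates to "false").
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonTLS",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "sqs:*",
            "Resource": "arn:aws:sqs:us-east-1:123456789012:patient-symptoms",
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

# Encryption at rest is a separate queue attribute: KmsMasterKeyId points at
# the customer managed key (whose key policy restricts authorized principals).
queue_attributes = {
    "KmsMasterKeyId": "alias/hospital-sqs-cmk",  # hypothetical CMK alias
    "Policy": json.dumps(queue_policy),
}
```

The SNS topic in option B is configured the same way, with `KmsMasterKeyId` as a topic attribute and the principal restrictions living in the KMS key policy.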
Question #365 Topic 1
A company runs a web application that is backed by Amazon RDS. A new database administrator caused data loss by accidentally editing
information in a database table. To help recover from this type of incident, the company wants the ability to restore the database to its state from 5 minutes before any change within the last 30 days.
Which feature should the solutions architect include in the design to meet this requirement?
A. Read replicas
B. Manual snapshots
C. Automated backups
D. Multi-AZ deployments
Community vote distribution
C (100%)
elearningtakai 3 months ago
Option C, Automated backups, will meet the requirement. Amazon RDS allows you to automatically create backups of your DB instance. Automated backups enable point-in-time recovery (PITR) for your DB instance down to a specific second within the retention period, which can be up to 35 days. By setting the retention period to 30 days, the company can restore the database to its state from up to 5 minutes before any change within the last 30 days.
upvoted 2 times
joechen2023 1 week, 4 days ago
I selected C as well, but I still don't know how an automated backup could have a copy from 5 minutes before any change. The AWS docs state "Automated backups occur daily during the preferred backup window." https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html.
I think the answer may be A, as a read replica will be kept in sync and you could then restore from it. Could an expert help?
upvoted 1 times
gold4otas 3 months ago
C: Automated Backups
https://aws.amazon.com/rds/features/backup/
upvoted 2 times
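Automated backups answer the "5 minutes before" requirement because RDS combines daily snapshots with transaction logs, letting you restore to any second inside the retention window. A sketch of the restore request, assuming a hypothetical incident time and instance names; the dict mirrors the parameters of RDS `RestoreDBInstanceToPointInTime`.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical incident: a bad edit at 14:30 UTC. Roll back to 5 minutes
# before it; any second within the 30-day retention period is restorable.
incident = datetime(2023, 6, 1, 14, 30, tzinfo=timezone.utc)
restore_time = incident - timedelta(minutes=5)

restore_params = {
    "SourceDBInstanceIdentifier": "webapp-db",           # hypothetical
    "TargetDBInstanceIdentifier": "webapp-db-restored",  # new instance
    "RestoreTime": restore_time,
}
```

Note that the restore produces a new DB instance rather than rewinding the original, so the application endpoint is switched over afterwards.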
Question #366 Topic 1
A company’s web application consists of an Amazon API Gateway API in front of an AWS Lambda function and an Amazon DynamoDB database.
The Lambda function handles the business logic, and the DynamoDB table hosts the data. The application uses Amazon Cognito user pools to identify the individual users of the application. A solutions architect needs to update the application so that only users who have a subscription can access premium content.
Which solution will meet this requirement with the LEAST operational overhead?
A. Enable API caching and throttling on the API Gateway API.
B. Set up AWS WAF on the API Gateway API. Create a rule to filter users who have a subscription.
C. Apply fine-grained IAM permissions to the premium content in the DynamoDB table.
D. Implement API usage plans and API keys to limit the access of users who do not have a subscription.
Community vote distribution
D (92%) 8%
marufxplorer 1 week ago
D
Option D involves implementing API usage plans and API keys. By associating specific API keys with users who have a valid subscription, you can control access to the premium content.
upvoted 1 times
kruasan 1 month, 4 weeks ago
A. This would not actually limit access based on subscriptions. It helps optimize and control API usage, but does not address the core requirement.
B. This could work by checking user subscription status in the WAF rule, but would require ongoing management of WAF and increases operational overhead.
C. This is a good approach, using IAM permissions to control DynamoDB access at a granular level based on subscriptions. However, it would require managing IAM permissions which adds some operational overhead.
D. This option uses API Gateway mechanisms to limit API access based on subscription status. It would require the least amount of ongoing management and changes, minimizing operational overhead. API keys could be easily revoked/changed as subscription status changes.
upvoted 3 times
imvb88 2 months, 1 week ago
Both C and D are possible, but D is more suitable since it is mentioned in https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
A,B not relevant.
upvoted 1 times
elearningtakai 3 months ago
The solution that will meet the requirement with the least operational overhead is to implement API Gateway usage plans and API keys to limit access to premium content for users who do not have a subscription.
Option A is incorrect because API caching and throttling are not designed for authentication or authorization purposes, and it does not provide access control.
Option B is incorrect because although AWS WAF is a useful tool to protect web applications from common web exploits, it is not designed for authorization purposes, and it might require additional configuration, which increases the operational overhead.
Option C is incorrect because although IAM permissions can restrict access to data stored in a DynamoDB table, it does not provide a mechanism for limiting access to specific content based on the user subscription. Moreover, it might require a significant amount of additional IAM permissions configuration, which increases the operational overhead.
upvoted 3 times
klayytech 3 months ago
To meet the requirement with the least operational overhead, you can implement API usage plans and API keys to limit the access of users who do not have a subscription. This way, you can control access to your API Gateway APIs by requiring clients to submit valid API keys with requests. You can associate usage plans with API keys to configure throttling and quota limits on individual client accounts.
upvoted 2 times
techhb 3 months, 2 weeks ago
Answer is D if looking for the least overhead: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html. C would achieve it, but its operational overhead is high.
upvoted 2 times
quentin17 3 months, 2 weeks ago
Both C & D are valid solutions. According to ChatGPT:
"Applying fine-grained IAM permissions to the premium content in the DynamoDB table is a valid approach, but it requires more effort in managing IAM policies and roles for each user, making it more complex and adding operational overhead."
upvoted 1 times
Karlos99 3 months, 2 weeks ago
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
upvoted 2 times
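Option D in practice is a usage plan attached to the API stage, with API keys issued only to subscribers. A sketch under hypothetical names and limits; the first dict mirrors API Gateway's `CreateUsagePlan` parameters, and the helper shows the access rule (issue or revoke a key as the subscription changes).

```python
# Hypothetical usage plan gating the premium stage of the API.
usage_plan = {
    "name": "premium-subscribers",                    # hypothetical name
    "apiStages": [{"apiId": "a1b2c3", "stage": "prod"}],
    "throttle": {"rateLimit": 100.0, "burstLimit": 200},
    "quota": {"limit": 100000, "period": "MONTH"},
}

def key_for_subscriber(user_id, has_subscription):
    # Only subscribed users get an API key attached to the plan; revoking or
    # disabling the key removes premium access with no code changes.
    if not has_subscription:
        return None
    return {"name": f"key-{user_id}", "enabled": True}
```

Since Cognito already identifies users, the subscription check that decides whether to issue a key can live wherever subscriptions are managed, keeping the API itself unchanged.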
Question #367 Topic 1
A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The application is hosted on redundant servers in the company's on-premises data centers in the United States, Asia, and Europe. The company’s
compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application.
What should a solutions architect do to meet these requirements?
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the
application by using a CNAME that points to the CloudFront DNS.
Community vote distribution
A (100%)
lucdt4 1 month ago
C, D: CloudFront doesn't support UDP. B: ALB doesn't handle UDP, so Global Accelerator with ALBs won't work. A is correct.
upvoted 2 times
SkyZeroZx 2 months ago
UDP = NLB
UDP = Global Accelerator
UDP does not work with CloudFront. Answer is A.
upvoted 3 times
Grace83 3 months, 1 week ago
Why is C not correct - does anyone know?
upvoted 2 times
Shrestwt 2 months, 1 week ago
Latency-based routing is already in use by the application; the AWS global network will optimize the path from users to the application.
upvoted 1 times
MssP 3 months ago
It could be valid, but I think A is better: it uses the AWS global network to optimize the path from users to applications, improving the performance of TCP and UDP traffic.
upvoted 1 times
FourOfAKind 3 months, 2 weeks ago
UDP == NLB
Must be hosted on-premises != CloudFront
upvoted 3 times
imvb88 2 months, 1 week ago
actually CloudFront's origin can be on-premises. Source: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_CustomOrigin
"A custom origin is an HTTP server, for example, a web server. The HTTP server can be an Amazon EC2 instance or an HTTP server that you host somewhere else. "
upvoted 1 times
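Option A can be sketched as the accelerator's two building blocks: a UDP listener, and one endpoint group per Region pointing at that Region's NLB (which in turn targets the on-premises servers by IP). The port and ARNs below are hypothetical placeholders; the dicts mirror Global Accelerator's `CreateListener` and `CreateEndpointGroup` parameters.

```python
# Hypothetical UDP listener for the accelerator (game traffic on port 5000).
listener = {
    "Protocol": "UDP",
    "PortRanges": [{"FromPort": 5000, "ToPort": 5000}],
}

# One endpoint group per Region, each registering that Region's NLB.
regional_nlbs = [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/nlb-us/abc"),
    ("ap-northeast-1", "arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:loadbalancer/net/nlb-asia/def"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/nlb-eu/ghi"),
]
endpoint_groups = [
    {"EndpointGroupRegion": region, "EndpointConfigurations": [{"EndpointId": nlb_arn}]}
    for region, nlb_arn in regional_nlbs
]
```

Clients then resolve a CNAME to the accelerator's DNS name and enter the AWS global network at the nearest edge, which is where the latency and availability improvement comes from.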
Question #368 Topic 1
A solutions architect wants all new users to have specific complexity requirements and mandatory rotation periods for IAM user passwords. What should the solutions architect do to accomplish this?
A. Set an overall password policy for the entire AWS account.
B. Set a password policy for each IAM user in the AWS account.
C. Use third-party vendor software to set password requirements.
D. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.
Community vote distribution
A (100%)
klayytech 3 months ago
To accomplish this, the solutions architect should set an overall password policy for the entire AWS account. This policy will apply to all IAM users in the account, including new users.
upvoted 2 times
WherecanIstart 3 months, 1 week ago
Set overall password policy ...
upvoted 1 times
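Option A in practice is a single account-wide setting. A sketch of the parameters IAM's `UpdateAccountPasswordPolicy` accepts; the specific values (length, rotation period) are hypothetical choices, not requirements from the question.

```python
# Hypothetical account-wide IAM password policy: complexity requirements
# plus mandatory rotation, applied to every current and future IAM user.
password_policy = {
    "MinimumPasswordLength": 14,
    "RequireSymbols": True,
    "RequireNumbers": True,
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
    "MaxPasswordAge": 90,          # forces rotation every 90 days
    "PasswordReusePrevention": 5,  # blocks reuse of the last 5 passwords
    "AllowUsersToChangePassword": True,
}
```

Because the policy is account-scoped, new users inherit it automatically, which is why per-user policies (option B) are unnecessary.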
Question #369 Topic 1
A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule. These tasks were written by different teams and have no common programming language. The company is concerned about performance and scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.
Community vote distribution
A (61%) C (26%) 10%
fkie4 Highly Voted 3 months, 2 weeks ago
The question says "These tasks were written by different teams and have no common programming language", and the key word is "scalable". Only Lambda fulfils these: Lambda functions can be written in different programming languages, and Lambda is scalable.
upvoted 6 times
FourOfAKind 3 months, 2 weeks ago
But the question states "several 1-hour tasks on a schedule", and the maximum runtime for Lambda is 15 minutes, so it can't be A.
upvoted 9 times
FourOfAKind 3 months, 2 weeks ago
can't be C
upvoted 4 times
smgsi 3 months, 2 weeks ago
It's not C, because the time limit of Lambda is 15 minutes.
upvoted 3 times
omoakin 1 month ago
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events)
upvoted 1 times
ruqui 1 month ago
wrong, Lambda maximum runtime is 15 minutes and the tasks run for an hour
upvoted 2 times
KMohsoe 1 month ago
B and D out!
A and C let's think!
AWS Lambda functions are time limited.
So, Option A
upvoted 1 times
lucdt4 1 month ago
AAAAAAAAAAAAAAAAA
because lambda only run within 15 minutes
upvoted 1 times
TariqKipkemei 1 month, 1 week ago
Answer is A.
It could have been C, but AWS Lambda functions can only be configured to run up to 15 minutes per execution, while the tasks in question need an hour to run.
upvoted 1 times
The question asks for the LEAST operational overhead. With Batch, you have to create the compute environment, the job queue, the job definition, and the jobs -> more operational overhead than creating an ASG.
upvoted 1 times
A not C
The maximum AWS Lambda function run time is 15 minutes. If a Lambda function runs for longer than 15 minutes, it will be terminated by AWS Lambda. This limit is in place to prevent the Lambda environment from becoming stale and to ensure that resources are available for other functions. If a task requires more than 15 minutes to complete, a different AWS service or architecture may be better suited for the use case.
upvoted 1 times
CCCCCCCCCC
upvoted 1 times
AAAAAAAAA
upvoted 1 times
It must be A!
In general, AWS Lambda can be more cost-effective for smaller, short-lived tasks or for event-driven computing use cases. For long running or computation heavy tasks, AWS Batch can be more cost-effective, as it allows you to provision and manage a more robust computing environment.
upvoted 2 times
I think the problem is that: 1. The tasks have 1-hour execution times. 2. There is no common language. So I think B is better.
upvoted 1 times
A for me; Lambda has a 15-minute timeout, so it can't be C.
upvoted 1 times
dangoooooo 2 months, 2 weeks ago
D is the answer. [The best solution is to create an AMI of the EC2 instance, and then use it as a template for which to launch additional instances using an Auto Scaling Group. This removes the issues of performance, scalability, and redundancy by allowing the EC2 instances to automatically scale and be launched across multiple Availability Zones.]from udemy
upvoted 2 times
Wrong! If you set up an AMI that is configured to run all the jobs, then every instance in the ASG will run all the jobs at the same time!! This solution won't address any scalability or performance problems.
upvoted 1 times
kraken21 2 months, 3 weeks ago
I am leaning towards A because:
Each individual job runs for about 1 hr., not ideal for lambda.
The concern is performance/scalability. If we break these multiple jobs into individual tasks and let AWS batch handle them, we might have less operational overhead to maintain and use the scalability power of AWS batch - Ec2 scaling.
The other options do not address the issue of breaking down multiple jobs running on the same machine. I feel that the programming language keyword is here to confuse us.
GL
upvoted 1 times
Lambda functions are short lived; the Lambda max timeout is 900 seconds (15 minutes). This can be difficult to manage and can cause issues in production applications. We'll take a look at AWS Lambda timeout limits, timeout errors, monitoring timeout errors, and how to apply best practices to handle them effectively
upvoted 1 times
MssP 3 months ago
runs several 1-hour tasks -> No way for Lambda. A is the option.
upvoted 4 times
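Option A works around both constraints the thread debates: Batch jobs run containers (so each team keeps its own language) with no Lambda-style 15-minute cap, and EventBridge supplies the schedule. A sketch with hypothetical names and ARNs; the dicts mirror EventBridge's `PutRule` and `PutTargets` parameters for a Batch job target.

```python
# Hypothetical schedule rule: fire the task once a day at 02:00 UTC.
rule = {
    "Name": "nightly-report-task",
    "ScheduleExpression": "cron(0 2 * * ? *)",
}

# Target: submit a Batch job (container-based, so any language, any runtime
# length) to a job queue when the rule fires.
target = {
    "Id": "nightly-report-target",
    "Arn": "arn:aws:batch:us-east-1:123456789012:job-queue/scheduled-tasks",
    "RoleArn": "arn:aws:iam::123456789012:role/events-batch-role",
    "BatchParameters": {
        "JobDefinition": "report-task:1",  # hypothetical job definition
        "JobName": "nightly-report",
    },
}
```

Each 1-hour task gets its own job definition and rule, and Batch scales the underlying compute, which removes the single-instance performance concern.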
Question #370 Topic 1
A company runs a public three-tier web application in a VPC. The application runs on Amazon EC2 instances across multiple Availability Zones.
The EC2 instances that run in private subnets need to communicate with a license server over the internet. The company needs a managed solution that minimizes operational maintenance.
Which solution meets these requirements?
A. Provision a NAT instance in a public subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
B. Provision a NAT instance in a private subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
C. Provision a NAT gateway in a public subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
D. Provision a NAT gateway in a private subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
Community vote distribution
C (100%)
UnluckyDucky Highly Voted 3 months, 2 weeks ago
"The company needs a managed solution that minimizes operational maintenance" Watch out for NAT instances vs NAT Gateways.
As the company needs a managed solution that minimizes operational maintenance, a NAT gateway in a public subnet is the answer.
upvoted 5 times
lucdt4 Most Recent 1 month ago
C
A NAT gateway doesn't work in a private subnet: it needs a route to an internet gateway.
upvoted 1 times
TariqKipkemei 1 month, 1 week ago
minimizes operational maintenance = NGW
upvoted 1 times
WherecanIstart 3 months, 1 week ago
C..provision NGW in Public Subnet
upvoted 1 times
Question #371 Topic 1
A company needs to create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host a digital media streaming application. The EKS cluster will use a managed node group that is backed by Amazon Elastic Block Store (Amazon EBS) volumes for storage. The company must
encrypt all data at rest by using a customer managed key that is stored in AWS Key Management Service (AWS KMS). Which combination of actions will meet this requirement with the LEAST operational overhead? (Choose two.)
A. Use a Kubernetes plugin that uses the customer managed key to perform data encryption.
B. After creation of the EKS cluster, locate the EBS volumes. Enable encryption by using the customer managed key.
C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created. Select the customer managed key as the default key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the customer managed key. Associate the role with the EKS cluster.
E. Store the customer managed key as a Kubernetes secret in the EKS cluster. Use the customer managed key to encrypt the EBS volumes.
Community vote distribution
BD (48%) CD (45%) 6%
asoli Highly Voted 3 months, 1 week ago
https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#:~:text=encrypted%20Amazon%20EBS%20volumes%20without%20using%20a%20launch%20template%2C%20encrypt%20all%20new%20Amazon%20EBS%20volumes%20created%20in%20your%20account.
upvoted 7 times
pedroso Most Recent 2 weeks, 4 days ago
I was in doubt between B and C.
You can't "Enable EBS encryption by default in the AWS Region". Enable EBS encryption by default is only possible at Account level, not Region. B is the right option once you can enable encryption on the EBS volume with KMS and custom KMS.
upvoted 1 times
antropaws 6 days, 13 hours ago
Not accurate: "Encryption by default is a Region-specific setting": https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default
upvoted 1 times
jayce5 3 weeks, 3 days ago
It's C and D. I tried it in my AWS console.
C seems to have less operational overhead compared to B.
upvoted 3 times
nauman001 1 month, 1 week ago
B and C.
Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key. Without permission from the key policy, IAM policies that allow permissions have no effect.
upvoted 1 times
kruasan 1 month, 4 weeks ago
B. Manually enable encryption on the intended EBS volumes after ensuring no default changes. Requires manually enabling encryption on the nodes but ensures minimum impact.
D. Create an IAM role with access to the key to associate with the EKS cluster. This provides key access permission just to the EKS cluster without changing broader IAM permissions.
upvoted 2 times
kruasan 1 month, 4 weeks ago
A. Using a custom plugin requires installing, managing and troubleshooting the plugin. Significant operational overhead.
C. Modifying the default Region encryption could impact other resources with different needs. Should be avoided if possible.
E. Managing Kubernetes secrets for key access requires operations within the EKS cluster. Additional operational complexity.
upvoted 1 times
B&C
upvoted 1 times
Quickly rule out A (which plugin? -> overhead) and E (bad practice).
Among B, C, D: B and C are functionally similar -> the choice must be between B and C; D is fixed.
Between B and C: C is out since it sets the default for all EBS volumes in the Region, which is more than required and even wrong: what if other applications' EBS volumes in the Region have different requirements?
upvoted 4 times
B. After creation of the EKS cluster, locate the EBS volumes. Enable encryption by using the customer managed key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the customer managed key. Associate the role with the EKS cluster.
Explanation:
Option B is the simplest and most direct way to enable encryption for the EBS volumes associated with the EKS cluster. After the EKS cluster is created, you can manually locate the EBS volumes and enable encryption using the customer managed key through the AWS Management Console, AWS CLI, or SDKs.
Option D involves creating an IAM role with a policy that grants permission to the customer managed key, and then associating that role with the EKS cluster. This allows the EKS cluster to have the necessary permissions to access the customer managed key for encrypting and decrypting data on the EBS volumes. This approach is more automated and can be easily managed through IAM, which provides centralized control and reduces operational overhead.
upvoted 1 times
kraken21 2 months, 3 weeks ago
"The company must encrypt all data at rest by using a customer managed key that is stored in AWS Key Management Service" : All data leans towards option CD. Least operational overhead.
upvoted 1 times
Option C is not necessary as enabling EBS encryption by default will apply to all EBS volumes in the region, not just those associated with the EKS cluster. Additionally, it does not specify the use of a customer managed key.
upvoted 2 times
How is it B? Option C is best practice, you can definitely specify a CMK within KMS when setting the default encryption. Please test it out yourself
upvoted 2 times
Option A is incorrect because it suggests using a Kubernetes plugin, which may increase operational overhead.
Option D is incorrect because it suggests creating an IAM role and associating it with the EKS cluster, which is not necessary for this scenario.
Option E is incorrect because it suggests storing the customer managed key as a Kubernetes secret, which is not the best practice for managing sensitive data such as encryption keys.
upvoted 1 times
maver144 2 months, 3 weeks ago
"Option D is incorrect because it suggests creating an IAM role and associating it with the EKS cluster, which is not necessary for this scenario."
Then your EKS cluster would not be able to access encrypted EBS volumes.
upvoted 1 times
UnluckyDucky 3 months, 1 week ago
B & D do exactly what's required in a very simple way with the least overhead.
Options C affects all EBS volumes in the region which is absolutely not necessary here.
upvoted 4 times
Maximus007 3 months, 2 weeks ago
Was thinking about CD vs CE, but CD has the least overhead.
upvoted 1 times
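The Region-default approach in option C reduces to two EC2 API settings, after which every new EBS volume the managed node group creates is encrypted with the customer managed key automatically. The key alias below is hypothetical; the call names and `KmsKeyId` parameter match the EC2 API (`EnableEbsEncryptionByDefault`, `ModifyEbsDefaultKmsKeyId`). Note the trade-off several commenters raise: these settings are Region-scoped, so they affect every new volume in the Region, not just the EKS cluster's.

```python
# Hypothetical pair of Region-scoped EC2 settings for option C, recorded as
# (api_call, parameters) so the sequence is explicit.
calls = [
    # 1. Encrypt all new EBS volumes in this Region by default.
    ("enable_ebs_encryption_by_default", {}),
    # 2. Make the customer managed key the Region's default EBS key.
    ("modify_ebs_default_kms_key_id", {"KmsKeyId": "alias/eks-ebs-cmk"}),
]
```

With these in place, the managed node group needs no launch-template changes, which is the "least operational overhead" argument for C.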
Question #372 Topic 1
A company wants to migrate an Oracle database to AWS. The database consists of a single table that contains millions of geographic information systems (GIS) images that are high resolution and are identified by a geographic code.
When a natural disaster occurs, tens of thousands of images get updated every few minutes. Each geographic code has a single image or row that is associated with it. The company wants a solution that is highly available and scalable during such events.
Which solution meets these requirements MOST cost-effectively?
A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
Community vote distribution
D (58%) B (42%)
Karlos99 Highly Voted 3 months, 2 weeks ago
The company wants a solution that is highly available and scalable
upvoted 8 times
[Removed] 2 months, 4 weeks ago
But DynamoDB is also highly available and scalable https://aws.amazon.com/dynamodb/faqs/#:~:text=DynamoDB%20automatically%20scales%20throughput%20capacity,high%20availability%20and%20data%20durability.
upvoted 1 times
pbpally 1 month, 2 weeks ago
Yes, but it has a 400 KB item size limit, so while it could theoretically store images, it's not a plausible solution.
upvoted 1 times
ruqui 1 month ago
The DynamoDB size limit doesn't matter!!!! The images are saved in S3 buckets. The right answer is B
upvoted 2 times
joehong Most Recent 1 week, 3 days ago
"A company wants to migrate an Oracle database to AWS"
upvoted 2 times
secdgs 1 week, 5 days ago
D: Wrong
If you calculate the Oracle Database licensing, it is not cost-effective. Multi-AZ is not scalable, and if you add scaling you need more licenses for the Oracle database.
upvoted 2 times
secdgs 2 weeks, 1 day ago
D is wrong because RDS with Multi-AZ does not auto scale and cannot guarantee database performance when "a natural disaster occurs, tens of thousands of images get updated every few minutes"
upvoted 1 times
Dun6 2 weeks, 3 days ago
The images are stored in S3. It is the metadata of the object that is stored in DynamoDB, which is obviously less than 400 KB. DynamoDB is a key-value store.
upvoted 1 times
MostafaWardany 2 weeks, 3 days ago
I voted for D, highly available and scalable
upvoted 1 times
My option is D. Why choose B? "_"
upvoted 4 times
TariqKipkemei 1 month, 1 week ago
Why would you want to change an SQL DB into a NoSQL DB? It involves code changes and a rewrite of the stored procedures. For me D is the best option. You get read scalability with two readable standby DB instances by deploying a Multi-AZ DB cluster.
upvoted 3 times
If you change to storing images on S3, you need to change code. And the DB is only one table; SQL or NoSQL makes little difference because there are no table relationships.
upvoted 2 times
This uses:
S3 for inexpensive, scalable image storage
DynamoDB as an index, which can scale seamlessly and cost-effectively
No expensive database storage/compute required
upvoted 2 times
A company wants to migrate an Oracle database to AWS.
ANS D
upvoted 3 times
Guys, "A company wants to migrate an Oracle database to AWS" Isn't Oracle SQL-based? So doesn't that mean DynamoDB is ruled out?
upvoted 4 times
Yes, 100%.
upvoted 2 times
Simple use case, highly available, and scalable -> Choose DynamoDB over RDS in terms of cost.
upvoted 2 times
B, because it's a KEY-VALUE scenario
upvoted 2 times
Maximus007 3 months, 2 weeks ago
According to ChatGPT
upvoted 2 times
Option B is the right answer. You cannot store high resolution images in DynamoDB due to its limitation - the maximum size of an item is 400 KB
upvoted 3 times
You said that DynamoDB has a limitation and the maximum size of an item is 400 KB. But the scenario states "contains millions of geographic information systems (GIS) images that are high resolution and are identified by a geographic code", so the answer must not be option B, right? High resolution images could be more than 400 KB in size, so DynamoDB is not the right answer here. I go for option D.
upvoted 1 times
In DynamoDB you will store the geographic code and the URL, not the image, so it will be less than 400 KB. You will serve tens of thousands of requests every few minutes; I think DynamoDB will work better than the Oracle DB
upvoted 3 times
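For reference, a minimal sketch of the option-B data model the comments describe: the image itself lives in S3, and DynamoDB holds only a small item mapping the geographic code to the image's S3 URL (the table attributes, bucket, and key names here are hypothetical, not from the question).

```python
# Option-B pattern: image in S3, DynamoDB stores only a pointer.
# Attribute names, bucket, and key are made up for illustration.

def make_gis_item(geo_code: str, bucket: str, key: str) -> dict:
    """Build a DynamoDB item (low-level attribute-value format) that maps
    a geographic code to the S3 URL of its image."""
    return {
        "GeoCode": {"S": geo_code},                 # partition key
        "ImageUrl": {"S": f"s3://{bucket}/{key}"},  # pointer, not the image
    }

item = make_gis_item("GEO-0042", "gis-images", "GEO-0042.tif")
print(item["ImageUrl"]["S"])  # s3://gis-images/GEO-0042.tif
```

Because each item is a few hundred bytes regardless of image resolution, the 400 KB item limit discussed above never comes into play.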
Michal_L_95 3 months, 2 weeks ago
And what about the fact that they are using an Oracle DB? Isn't it easier to move to RDS, which will behave in a similar way, keeping not the images but only the associated codes and S3 URLs?
In my opinion it is more cost-effective to do it with RDS.
upvoted 2 times
Michal_L_95 3 months, 2 weeks ago
Option D
upvoted 1 times
taehyeki 3 months, 2 weeks ago
Question #373 Topic 1
A company has an application that collects data from IoT sensors on automobiles. The data is streamed and stored in Amazon S3 through
Amazon Kinesis Data Firehose. The data produces trillions of S3 objects each year. Each morning, the company uses the data from the previous 30 days to retrain a suite of machine learning (ML) models.
Four times each year, the company uses the data from the previous 12 months to perform analysis and train other ML models. The data must be available with minimal delay for up to 1 year. After 1 year, the data must be retained for archival purposes.
Which storage solution meets these requirements MOST cost-effectively?
A. Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
B. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after 1 year.
C. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive after 1 year.
Community vote distribution
D (89%) 6%
UnluckyDucky Highly Voted 3 months, 2 weeks ago
The access pattern is given, therefore D is the most logical answer.
Intelligent tiering is for random, unpredictable access.
upvoted 6 times
ealpuche 1 month, 2 weeks ago
You are missing: <<The data must be available with minimal delay for up to 1 year. After one year, the data must be retained for archival purposes.>> You can be certain that data older than 1 year no longer needs fast access.
upvoted 1 times
TariqKipkemei Most Recent 1 month, 1 week ago
First 30 days, data accessed every morning = S3 Standard
Beyond 30 days, data accessed quarterly = S3 Standard-Infrequent Access
Beyond 1 year, data retained = S3 Glacier Deep Archive
upvoted 4 times
ealpuche 1 month, 2 weeks ago
Option A meets the requirements most cost-effectively. The S3 Intelligent-Tiering storage class provides automatic tiering of objects between the S3 Standard and S3 Standard-Infrequent Access (S3 Standard-IA) tiers based on changing access patterns, which helps optimize costs. The S3 Lifecycle policy can be used to transition objects to S3 Glacier Deep Archive after 1 year for archival purposes. This solution also meets the requirement for minimal delay in accessing data for up to 1 year. Option B is not cost-effective because it does not include the transition of data to S3 Glacier Deep Archive after 1 year. Option C is not the best solution because S3 Standard-IA is not designed for long-term archival purposes and incurs higher storage costs. Option D is also not the most cost-effective solution as it transitions objects to the S3 Standard-IA tier after 30 days, which is unnecessary for the requirement to retrain the suite of ML models each morning using data from the previous 30 days.
upvoted 1 times
KAUS2 3 months, 2 weeks ago
Agree with UnluckyDucky , the correct option is D
upvoted 1 times
fkie4 3 months, 2 weeks ago
Should be D. see this:
upvoted 2 times
Nithin1119 3 months, 2 weeks ago
fkie4 3 months, 2 weeks ago
hello!!??
upvoted 2 times
taehyeki 3 months, 2 weeks ago
D because:
First 30 days - data accessed every morning (predictable and frequent) – S3 Standard
After 30 days, accessed 4 times a year – S3 Infrequent Access
Data preserved - S3 Glacier Deep Archive
upvoted 6 times
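The option-D tiering the comments converge on can be sketched as a single S3 Lifecycle rule. This is the JSON shape S3's PutBucketLifecycleConfiguration expects; the rule ID is made up.

```python
# Option-D lifecycle: Standard for 30 days, Standard-IA until 1 year,
# then Deep Archive. Rule ID is hypothetical.

lifecycle_config = {
    "Rules": [
        {
            "ID": "sensor-data-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [
                # Hot for the first 30 days (daily retraining), then IA
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # After 1 year, archival only
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

days = [t["Days"] for t in lifecycle_config["Rules"][0]["Transitions"]]
print(days)  # [30, 365]
```

Objects enter as S3 Standard by default, so only the two transitions need to be declared.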
Question #374 Topic 1
A company is running several business applications in three separate VPCs within the us-east-1 Region. The applications must be able to communicate between VPCs. The applications also must be able to consistently send hundreds of gigabytes of data each day to a latency-sensitive application that runs in a single on-premises data center.
A solutions architect needs to design a network connectivity solution that maximizes cost-effectiveness. Which solution meets these requirements?
A. Configure three AWS Site-to-Site VPN connections from the data center to AWS. Establish connectivity by configuring one VPN connection for each VPC.
B. Launch a third-party virtual network appliance in each VPC. Establish an IPsec VPN tunnel between the data center and each virtual appliance.
C. Set up three AWS Direct Connect connections from the data center to a Direct Connect gateway in us-east-1. Establish connectivity by configuring each VPC to use one of the Direct Connect connections.
D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit gateway. Establish connectivity between the Direct Connect connection and the transit gateway.
Community vote distribution
D (100%)
alexandercamachop 3 weeks, 5 days ago
Transit GW is a hub for connecting all VPCs.
Direct Connect is expensive, therefore only one connection, attached to the transit gateway (the hub all our VPCs connect to)
upvoted 1 times
Sivasaa 2 months ago
Can someone tell why option C will not work here
upvoted 2 times
jdamian 1 month, 3 weeks ago
Cost-effectiveness: 3 Direct Connect connections cost more than 1. There is no need for more than 1 Direct Connect connection.
upvoted 1 times
SkyZeroZx 2 months ago
cost-effectiveness D
upvoted 1 times
WherecanIstart 3 months, 1 week ago
Transit Gateway will achieve this result..
upvoted 3 times
Karlos99 3 months, 2 weeks ago
maximizes cost-effectiveness
upvoted 2 times
taehyeki 3 months, 2 weeks ago
Question #375 Topic 1
An ecommerce company is building a distributed application that involves several serverless functions and AWS services to complete order-processing tasks. These tasks require manual approvals as part of the workflow. A solutions architect needs to design an architecture for the
order-processing application. The solution must be able to combine multiple AWS Lambda functions into responsive serverless applications. The solution also must orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Step Functions to build the application.
B. Integrate all the application components in an AWS Glue job.
C. Use Amazon Simple Queue Service (Amazon SQS) to build the application.
D. Use AWS Lambda functions and Amazon EventBridge events to build the application.
Community vote distribution
A (100%)
BeeKayEnn 3 months, 1 week ago
Key: distributed application processing, microservices orchestration (orchestrate data and services). A would be the best fit.
AWS Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
Reference: https://aws.amazon.com/step-functions/#:~:text=AWS%20Step%20Functions%20is%20a,machine%20learning%20(ML)%20pipelines.
upvoted 2 times
COTIT 3 months, 1 week ago
Approval is explicit for the solution. -> "A common use case for AWS Step Functions is a task that requires human intervention (for example, an approval process). Step Functions makes it easy to coordinate the components of distributed applications as a series of steps in a visual workflow called a state machine. You can quickly build and run state machines to execute the steps of your application in a reliable and scalable fashion. (https://aws.amazon.com/pt/blogs/compute/implementing-serverless-manual-approval-steps-in-aws-step-functions-and-amazon-api-gateway/)"
upvoted 1 times
kinglong12 3 months, 2 weeks ago
AWS Step Functions is a fully managed service that makes it easy to build applications by coordinating the components of distributed applications and microservices using visual workflows. With Step Functions, you can combine multiple AWS Lambda functions into responsive serverless applications and orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers. Step Functions also allows for manual approvals as part of the workflow. This solution meets all the requirements with the least operational overhead.
upvoted 3 times
ktulu2602 3 months, 2 weeks ago
Option A: Use AWS Step Functions to build the application.
AWS Step Functions is a serverless workflow service that makes it easy to coordinate distributed applications and microservices using visual workflows. It is an ideal solution for designing architectures for distributed applications that involve multiple AWS services and serverless functions, as it allows us to orchestrate the flow of our application components using visual workflows. AWS Step Functions also integrates with other AWS services like AWS Lambda, Amazon EC2, and Amazon ECS, and it has built-in error handling and retry mechanisms. This option provides a serverless solution with the least operational overhead for building the application.
upvoted 3 times
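The manual-approval pattern the comments cite can be sketched as an Amazon States Language definition with a callback task: a `.waitForTaskToken` state pauses the workflow until an approver responds. The state names and ARNs below are placeholders, not from the question.

```python
# Rough ASL sketch of an order-processing workflow with a human-approval
# step. All Resource ARNs are hypothetical placeholders.

definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:ProcessOrder",
            "Next": "WaitForApproval",
        },
        "WaitForApproval": {
            # .waitForTaskToken pauses the execution until someone calls
            # SendTaskSuccess/SendTaskFailure with the issued token.
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Next": "FulfillOrder",
        },
        "FulfillOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:FulfillOrder",
            "End": True,
        },
    },
}
print(definition["StartAt"])  # ProcessOrder
```

The same state machine can also target EC2, ECS, or on-premises workers via activities, which is what makes A cover the "orchestrate data and services" requirement.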
Question #376 Topic 1
A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to the database come from serverless applications.
Application traffic to the database changes significantly at random intervals. At times of high demand, users report that their applications experience database connection rejection errors.
Which solution will resolve this issue with the LEAST operational overhead?
A. Create a proxy in RDS Proxy. Configure the users’ applications to use the DB instance through RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users’ applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the users’ applications to use the new DB instance.
D. Configure Multi-AZ for the DB instance. Configure the users’ applications to switch between the DB instances.
Community vote distribution
A (100%)
antropaws 1 month ago
Wait, why not B?????
upvoted 1 times
roxx529 1 month, 1 week ago
To reduce application failures resulting from database connection timeouts, the best solution is to enable RDS Proxy on the RDS DB instances
upvoted 1 times
COTIT 3 months, 1 week ago
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. (https://aws.amazon.com/pt/rds/proxy/)
upvoted 3 times
ktulu2602 3 months, 2 weeks ago
The correct solution for this scenario would be to create a proxy in RDS Proxy. RDS Proxy allows for managing thousands of concurrent database connections, which can help reduce connection errors. RDS Proxy also provides features such as connection pooling, read/write splitting, and retries. This solution requires the least operational overhead as it does not involve migrating to a different instance class or setting up a new cache layer. Therefore, option A is the correct answer.
upvoted 4 times
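The connection-pooling idea behind RDS Proxy can be illustrated with a toy model (no AWS calls, all names invented): a fixed pool of connections is shared and reused, so a burst of serverless invocations never opens more connections than the database can hold.

```python
import queue

# Toy illustration of what RDS Proxy does for serverless callers: a fixed
# pool of connections is reused instead of each invocation opening (and
# possibly being rejected for) a brand-new one.

class FakeConnection:
    opened = 0  # counts how many "real" DB connections were ever created
    def __init__(self):
        FakeConnection.opened += 1

class Pool:
    def __init__(self, size: int):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(FakeConnection())
    def acquire(self) -> FakeConnection:
        return self._q.get()   # waits for a free connection, never rejects
    def release(self, conn: FakeConnection) -> None:
        self._q.put(conn)

pool = Pool(size=2)
for _ in range(100):           # 100 "Lambda invocations"
    conn = pool.acquire()
    pool.release(conn)

print(FakeConnection.opened)   # 2 -- only the pooled connections were opened
```

A hundred invocations reuse two connections, which is why the connection-rejection errors in the question disappear with option A.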
Question #377 Topic 1
A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send
reports to the auditing system as soon as they are launched and terminated. Which solution achieves these goals MOST efficiently?
A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send data to the audit system when instances are launched and terminated.
D. Run a custom script on the instance operating system to send data to the audit system. Configure the script to be invoked by the EC2 Auto Scaling group when the instance starts and is terminated.
Community vote distribution
B (100%)
COTIT 3 months, 1 week ago
Amazon EC2 Auto Scaling offers the ability to add lifecycle hooks to your Auto Scaling groups. These hooks let you create solutions that are aware of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs. (https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html)
upvoted 2 times
fkie4 3 months, 2 weeks ago
it is B. read this: https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
upvoted 1 times
ktulu2602 3 months, 2 weeks ago
The most efficient solution for this scenario is to use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated. The lifecycle hook can be used to delay instance termination until the script has completed, ensuring that all data is sent to the audit system before the instance is terminated. This solution is more efficient than using a scheduled AWS Lambda function, which would require running the function periodically and may not capture all instances launched and terminated within the interval. Running a custom script through user data is also not an optimal solution, as it may not guarantee that all instances send data to the audit system. Therefore, option B is the correct answer.
upvoted 4 times
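Option B amounts to registering two hooks on the Auto Scaling group, one per lifecycle transition. A sketch in the parameter shape of EC2 Auto Scaling's PutLifecycleHook call follows; the hook names, group name, and notification ARN are hypothetical.

```python
# Two lifecycle hooks: one fires at launch, one at termination. The
# reporting script runs during the hook's wait window. Names are made up.

launch_hook = {
    "LifecycleHookName": "report-on-launch",
    "AutoScalingGroupName": "web-asg",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    "NotificationTargetARN": "arn:aws:sns:...:audit-topic",
    "HeartbeatTimeout": 300,  # seconds the instance stays in the wait state
}

terminate_hook = dict(
    launch_hook,
    LifecycleHookName="report-on-terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
)
print(terminate_hook["LifecycleTransition"])
```

The termination hook is what ktulu2602 describes above: it delays instance termination until the audit report has been sent.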
Question #378 Topic 1
A company is developing a real-time multiplayer game that uses UDP for communications between the client and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solutions architect recommend?
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
Community vote distribution
B (100%)
TariqKipkemei 1 month, 1 week ago
UDP = NLB
Non-relational data = Dynamo DB
upvoted 1 times
elearningtakai 3 months ago
Option B is a good fit because a Network Load Balancer can handle UDP traffic, and Amazon DynamoDB on-demand can provide automatic scaling without intervention
upvoted 1 times
aragon_saa 3 months, 2 weeks ago
B
upvoted 1 times
Kenp1192 3 months, 2 weeks ago
B
Because NLB can handle UDP and DynamoDB is Non-Relational
upvoted 1 times
fruto123 3 months, 2 weeks ago
key words - UDP, non-relational data
answers - NLB for UDP application, DynamoDB for non-relational data
upvoted 3 times
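The "scale without intervention" half of option B is the on-demand billing mode. A sketch in the parameter shape of DynamoDB's CreateTable, with a hypothetical table and key name:

```python
# Option-B table: on-demand capacity, so DynamoDB absorbs the random
# traffic spikes with no capacity planning. Names are hypothetical.

create_table_params = {
    "TableName": "GamerScores",
    "AttributeDefinitions": [
        {"AttributeName": "PlayerId", "AttributeType": "S"}
    ],
    "KeySchema": [
        {"AttributeName": "PlayerId", "KeyType": "HASH"}
    ],
    # PAY_PER_REQUEST = on-demand: no provisioned read/write capacity.
    "BillingMode": "PAY_PER_REQUEST",
}
print(create_table_params["BillingMode"])  # PAY_PER_REQUEST
```

With PAY_PER_REQUEST there are no ProvisionedThroughput settings to tune, which is exactly the "scale without intervention" requirement.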
Question #379 Topic 1
A company hosts a frontend application that uses an Amazon API Gateway API backend that is integrated with AWS Lambda. When the API receives requests, the Lambda function loads many libraries. Then the Lambda function connects to an Amazon RDS database, processes the
data, and returns the data to the frontend application. The company wants to ensure that response latency is as low as possible for all its users with the fewest number of changes to the company's operations.
Which solution will meet these requirements?
A. Establish a connection between the frontend application and the database to make queries faster by bypassing the API.
B. Configure provisioned concurrency for the Lambda function that handles the requests.
C. Cache the results of the queries in Amazon S3 for faster retrieval of similar datasets.
D. Increase the size of the database to increase the number of connections Lambda can establish at one time.
Community vote distribution
B (100%)
UnluckyDucky Highly Voted 3 months, 2 weeks ago
Key: the Lambda function loads many libraries
Configuring provisioned concurrency would get rid of the "cold start" of the function, therefore speeding up the process.
upvoted 9 times
kampatra Highly Voted 3 months, 2 weeks ago
Provisioned concurrency – Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond immediately to your function's invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.
upvoted 6 times
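The cold-start mechanics behind this answer can be shown with a toy model (no AWS involved): module-level work, like loading many libraries, runs once per execution environment, and provisioned concurrency pre-runs it so no request pays that cost.

```python
# Toy model of a Lambda cold start. The counter stands in for the cost of
# "the Lambda function loads many libraries" from the question.

INIT_RUNS = 0

def init_environment():
    """Stands in for importing the heavy libraries."""
    global INIT_RUNS
    INIT_RUNS += 1

init_environment()  # module load time, i.e. the cold start

def handler(event, context=None):
    # By request time, the libraries are already loaded.
    return {"status": 200, "echo": event}

for i in range(5):  # five warm invocations of the same environment
    handler({"n": i})

print(INIT_RUNS)  # 1 -- init ran once, not once per request
```

Provisioned concurrency keeps a requested number of such pre-initialized environments warm, so even the first user request skips the init step.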
elearningtakai Most Recent 3 months ago
Answer B is correct: https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html Answer C: would need the application to be modified
upvoted 4 times
elearningtakai 3 months ago
This is relevant to "cold start" with keywords: "Lambda function loads many libraries"
upvoted 1 times
Karlos99 3 months, 2 weeks ago
https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html
upvoted 3 times
Question #380 Topic 1
A company is migrating its on-premises workload to the AWS Cloud. The company already uses several Amazon EC2 instances and Amazon RDS DB instances. The company wants a solution that automatically starts and stops the EC2 instances and DB instances outside of business hours. The solution must minimize cost and infrastructure maintenance.
Which solution will meet these requirements?
A. Scale the EC2 instances by using elastic resize. Scale the DB instances to zero outside of business hours.
B. Explore AWS Marketplace for partner solutions that will automatically start and stop the EC2 instances and DB instances on a schedule.
C. Launch another EC2 instance. Configure a crontab schedule to run shell scripts that will start and stop the existing EC2 instances and DB instances on a schedule.
D. Create an AWS Lambda function that will start and stop the EC2 instances and DB instances. Configure Amazon EventBridge to invoke the Lambda function on a schedule.
Community vote distribution
D (100%)
ktulu2602 Highly Voted 3 months, 2 weeks ago
The most efficient solution for automatically starting and stopping EC2 instances and DB instances on a schedule while minimizing cost and infrastructure maintenance is to create an AWS Lambda function and configure Amazon EventBridge to invoke the function on a schedule.
Option A, scaling EC2 instances by using elastic resize and scaling DB instances to zero outside of business hours, is not feasible as DB instances cannot be scaled to zero.
Option B, exploring AWS Marketplace for partner solutions, may be an option, but it may not be the most efficient solution and could potentially add additional costs.
Option C, launching another EC2 instance and configuring a crontab schedule to run shell scripts that will start and stop the existing EC2 instances and DB instances on a schedule, adds unnecessary infrastructure and maintenance.
upvoted 9 times
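A minimal sketch of the option-D Lambda logic. The EC2 client is injected as a parameter so the logic can be exercised without AWS; in a real function it would be a boto3 client, and the event shape and instance IDs here are assumptions.

```python
# Option-D sketch: one Lambda, invoked by two EventBridge schedules, starts
# or stops instances based on the event. Event shape is hypothetical.

def handler(event, ec2_client):
    ids = event["instance_ids"]
    if event["action"] == "start":
        ec2_client.start_instances(InstanceIds=ids)
    else:
        ec2_client.stop_instances(InstanceIds=ids)
    return {"action": event["action"], "count": len(ids)}

# Stand-in for the boto3 EC2 client, recording what was called.
class StubEC2:
    def __init__(self):
        self.calls = []
    def start_instances(self, InstanceIds):
        self.calls.append(("start", InstanceIds))
    def stop_instances(self, InstanceIds):
        self.calls.append(("stop", InstanceIds))

stub = StubEC2()
result = handler({"action": "stop", "instance_ids": ["i-123"]}, stub)
print(result)  # {'action': 'stop', 'count': 1}
```

Two EventBridge schedule rules (one for the morning start, one for the evening stop) would each pass the matching action in the event, so no servers or crontabs need maintaining.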
WherecanIstart Most Recent 3 months, 1 week ago
Minimize cost and maintenance...
upvoted 1 times
dcp 3 months, 2 weeks ago
DDDDDDDDDDD
upvoted 1 times
Question #381 Topic 1
A company hosts a three-tier web application that includes a PostgreSQL database. The database stores the metadata from documents. The company searches the metadata for key terms to retrieve documents that the company reviews in a report each month. The documents are stored in Amazon S3. The documents are usually written only once, but they are updated frequently.
The reporting process takes a few hours with the use of relational queries. The reporting process must not prevent any document modifications or the addition of new documents. A solutions architect needs to implement a solution to speed up the reporting process.
Which solution will meet these requirements with the LEAST amount of change to the application code?
A. Set up a new Amazon DocumentDB (with MongoDB compatibility) cluster that includes a read replica. Scale the read replica to generate the reports.
B. Set up a new Amazon Aurora PostgreSQL DB cluster that includes an Aurora Replica. Issue queries to the Aurora Replica to generate the reports.
C. Set up a new Amazon RDS for PostgreSQL Multi-AZ DB instance. Configure the reporting module to query the secondary RDS node so that the reporting module does not affect the primary node.
D. Set up a new Amazon DynamoDB table to store the documents. Use a fixed write capacity to support new document entries. Automatically scale the read capacity to support the reports.
Community vote distribution
B (92%) 8%
wRhlH 6 days, 18 hours ago
"The reporting process takes a few hours with the use of RELATIONAL queries."
upvoted 1 times
TariqKipkemei 1 month, 1 week ago
Load balancing = Read replica
High availability = Multi AZ
upvoted 2 times
lexotan 2 months ago
B is the right one. Why doesn't the admin correct these wrong answers?
upvoted 1 times
imvb88 2 months, 1 week ago
The reporting process queries the metadata (not the documents) and uses relational queries -> A, D out.
C: wrong since the secondary RDS node in a Multi-AZ setup is in standby mode, not available for querying
B: reporting using a Replica is a design pattern. Using Aurora is an exam pattern.
upvoted 2 times
Maximus007 3 months, 2 weeks ago
While both B & D seem to be relevant, ChatGPT suggests B as the correct one
upvoted 1 times
cegama543 3 months, 2 weeks ago
Option B (Set up a new Amazon Aurora PostgreSQL DB cluster that includes an Aurora Replica. Issue queries to the Aurora Replica to generate the reports) is the best option for speeding up the reporting process for a three-tier web application that includes a PostgreSQL database storing metadata from documents, while not impacting document modifications or additions, with the least amount of change to the application code.
upvoted 2 times
UnluckyDucky 3 months, 2 weeks ago
"LEAST amount of change to the application code"
Aurora is a relational database that supports PostgreSQL, and with the help of a read replica we can send the reporting process that takes several hours to the replica, therefore not affecting the primary node, which can handle new writes and document modifications.
upvoted 1 times
Ashukaushal619 3 months, 2 weeks ago
It's D only, recorrected
upvoted 1 times
Question #382 Topic 1
A company has a three-tier application on AWS that ingests sensor data from its users’ devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier. The application tier makes calls to a database.
What should a solutions architect do to improve the security of the data in transit?
A. Configure a TLS listener. Deploy the server certificate on the NLB.
B. Configure AWS Shield Advanced. Enable AWS WAF on the NLB.
C. Change the load balancer to an Application Load Balancer (ALB). Enable AWS WAF on the ALB.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances by using AWS Key Management Service (AWS KMS).
Community vote distribution
A (100%)
fruto123 Highly Voted 3 months, 2 weeks ago
Network Load Balancers now support TLS protocol. With this launch, you can now offload resource intensive decryption/encryption from your application servers to a high throughput, and low latency Network Load Balancer. Network Load Balancer is now able to terminate TLS traffic and set up connections with your targets either over TCP or TLS protocol.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html https://exampleloadbalancer.com/nlbtls_demo.html
upvoted 10 times
imvb88 Highly Voted 2 months, 1 week ago
security of data in transit -> think of SSL/TLS. Check: NLB supports TLS https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
B (DDoS) and C (SQL injection) address other threats; D (EBS encryption) is for data at rest.
upvoted 6 times
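Option A, concretely, is a TLS listener on the NLB. A sketch in the parameter shape of ELBv2's CreateListener follows; every ARN is a truncated placeholder, not a real resource.

```python
# Option-A sketch: a TLS listener terminating encrypted traffic on port
# 443 at the NLB. ARNs are placeholders.

listener_params = {
    "LoadBalancerArn": "arn:aws:elasticloadbalancing:...:loadbalancer/net/sensor-nlb/...",
    "Protocol": "TLS",   # a plain TCP listener would leave traffic unencrypted
    "Port": 443,
    "Certificates": [
        {"CertificateArn": "arn:aws:acm:...:certificate/..."}
    ],
    "DefaultActions": [
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web/..."}
    ],
}
print(listener_params["Protocol"], listener_params["Port"])  # TLS 443
```

As the fruto123 comment notes, the NLB can then forward to targets over TCP or TLS, offloading the decryption work from the web tier.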
klayytech Most Recent 2 months, 4 weeks ago
To improve the security of data in transit, you can configure a TLS listener on the Network Load Balancer (NLB) and deploy the server certificate on it. This will encrypt traffic between clients and the NLB. You can also use AWS Certificate Manager (ACM) to provision, manage, and deploy SSL/TLS certificates for use with AWS services and your internal connected resources1.
You can also change the load balancer to an Application Load Balancer (ALB) and enable AWS WAF on it. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources3.
the A and C correct without transit but the need to improve the security of the data in transit? so he need SSL/TLS certificates
upvoted 1 times
Question #383 Topic 1
A company is planning to migrate a commercial off-the-shelf application from its on-premises data center to AWS. The software has a software licensing model using sockets and cores with predictable capacity and uptime requirements. The company wants to use its existing licenses, which were purchased earlier this year.
Which Amazon EC2 pricing option is the MOST cost-effective?
A. Dedicated Reserved Hosts
B. Dedicated On-Demand Hosts
C. Dedicated Reserved Instances
D. Dedicated On-Demand Instances
Community vote distribution
A (100%)
imvb88 2 months, 1 week ago
Bring custom purchased licenses to AWS -> Dedicated Host -> C, D out
Need cost-effective solution -> "reserved" -> A
upvoted 3 times
imvb88 2 months, 1 week ago
https://aws.amazon.com/ec2/dedicated-hosts/
Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, so that you get the flexibility and cost effectiveness of using your own licenses, but with the resiliency, simplicity and elasticity of AWS.
upvoted 1 times
fkie4 3 months, 2 weeks ago
"predictable capacity and uptime requirements" means "Reserved"
"sockets and cores" means "dedicated host"
upvoted 4 times
aragon_saa 3 months, 2 weeks ago
A
upvoted 1 times
fruto123 3 months, 2 weeks ago
Dedicated Host Reservations provide a billing discount compared to running On-Demand Dedicated Hosts. Reservations are available in three payment options.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html
upvoted 3 times
Kenp1192 3 months, 2 weeks ago
A
is the most cost effective
upvoted 1 times
Question #384 Topic 1
A company runs an application on Amazon EC2 Linux instances across multiple Availability Zones. The application needs a storage layer that is
highly available and Portable Operating System Interface (POSIX)-compliant. The storage layer must provide maximum data durability and must be shareable across the EC2 instances. The data in the storage layer will be accessed frequently for the first 30 days and will be accessed
infrequently after that time.
Which solution will meet these requirements MOST cost-effectively?
A. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Glacier.
B. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Standard-Infrequent Access (S3 Standard-IA).
C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a lifecycle management policy to move infrequently accessed data to EFS Standard-Infrequent Access (EFS Standard-IA).
D. Use the Amazon Elastic File System (Amazon EFS) One Zone storage class. Create a lifecycle management policy to move infrequently accessed data to EFS One Zone-Infrequent Access (EFS One Zone-IA).
Community vote distribution
C (85%) D (15%)
RainWhisper 4 days, 21 hours ago
Amazon Elastic File System (Amazon EFS) Standard storage class = "maximum data durability"
upvoted 1 times
Yadav_Sanjay 1 week, 3 days ago
D - It should be cost-effective
upvoted 1 times
Abrar2022 2 weeks, 2 days ago
POSIX file system access = only Amazon EFS supports
upvoted 1 times
TariqKipkemei 1 month, 1 week ago
Multi AZ = both EFS and S3 support
Storage classes = both EFS and S3 support
POSIX file system access = only Amazon EFS supports
upvoted 2 times
imvb88 2 months, 1 week ago
POSIX + sharable across EC2 instances --> EFS --> A, B out
Instances run across multiple AZ -> C is needed.
upvoted 1 times
WherecanIstart 3 months, 1 week ago
Linux based system points to EFS plus POSIX-compliant is also EFS related.
upvoted 2 times
fkie4 3 months, 2 weeks ago
"POSIX-compliant" means EFS.
also, file system can be shared with multiple EC2 instances means "EFS"
upvoted 3 times
KAUS2 3 months, 2 weeks ago
Option C is the correct answer .
upvoted 1 times
Ruhi02 3 months, 2 weeks ago
Answer c : https://aws.amazon.com/efs/features/infrequent-access/
upvoted 1 times
ktulu2602 3 months, 2 weeks ago
Option A, using S3, is not a good option as it is an object storage service and not POSIX-compliant. Option B, using S3 Standard-IA, is also not a good option as it is an object storage service and not POSIX-compliant. Option D, using EFS One Zone, is not the best option for high availability since it is only stored in a single AZ.
upvoted 1 times
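The lifecycle rule under discussion can be sketched as a simple predicate (a local simulation with the question's 30-day threshold, not the EFS API):

```python
from datetime import datetime, timedelta

# Sketch of the lifecycle rule in option C: files not accessed for 30 days
# transition from EFS Standard to EFS Standard-IA. This is a local
# simulation; class names mirror the question, not a real API call.
TRANSITION_AFTER = timedelta(days=30)

def storage_class(last_access: datetime, now: datetime) -> str:
    """Return the storage class a lifecycle policy would place a file in."""
    if now - last_access >= TRANSITION_AFTER:
        return "EFS Standard-IA"
    return "EFS Standard"

now = datetime(2023, 7, 1)
print(storage_class(datetime(2023, 6, 25), now))  # accessed 6 days ago
print(storage_class(datetime(2023, 5, 1), now))   # accessed 61 days ago
```

Frequently accessed files stay in Standard; anything untouched past the threshold is billed at the cheaper IA rate, which is what makes option C cost-effective.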
Question #385 Topic 1
A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load
balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the least access required to still be able to perform its tasks.
Which additional configuration strategy should the solutions architect use to meet these requirements?
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
Community vote distribution
C (100%)
elearningtakai 3 months ago
Option C is the correct choice.
upvoted 1 times
WherecanIstart 3 months, 1 week ago
Load balancer is public facing accepting all traffic coming towards the VPC (0.0.0.0/0). The web server needs to trust traffic originating from the ALB. The DB will only trust traffic originating from the Web server on port 3306 for Mysql
upvoted 4 times
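The least-privilege chain in option C can be sketched as a toy rule model (the security-group ids and evaluation logic here are illustrative, not the AWS API):

```python
# Minimal local model of the security-group chain in answer C.
# Names are hypothetical; this simulates rule evaluation only.
ALB_SG, WEB_SG, DB_SG = "sg-alb", "sg-web", "sg-db"

# Each rule: (port, source) where source is a CIDR or a security-group id.
rules = {
    ALB_SG: [(443, "0.0.0.0/0")],   # internet-facing load balancer
    WEB_SG: [(443, ALB_SG)],        # web servers trust only the ALB
    DB_SG:  [(3306, WEB_SG)],       # MySQL trusts only the web servers
}

def allowed(dest_sg: str, port: int, source_sg: str) -> bool:
    """True if traffic from source_sg reaches dest_sg on this port."""
    return any(
        p == port and (src == "0.0.0.0/0" or src == source_sg)
        for p, src in rules[dest_sg]
    )

print(allowed(DB_SG, 3306, WEB_SG))   # True: web tier can reach MySQL
print(allowed(DB_SG, 3306, ALB_SG))   # False: ALB cannot reach MySQL
print(allowed(WEB_SG, 443, ALB_SG))   # True: only the ALB reaches the web tier
```

Referencing a security group as the rule source (rather than an IP range) is what keeps each tier's access limited to the tier directly in front of it.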
fkie4 3 months, 2 weeks ago
Just C. plain and simple
upvoted 1 times
aragon_saa 3 months, 2 weeks ago
C
upvoted 2 times
taehyeki 3 months, 2 weeks ago
cccccc
upvoted 1 times
Question #386 Topic 1
An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database
runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?
A. Implement Amazon SNS to store the database calls.
B. Implement Amazon ElastiCache to cache the large datasets.
C. Implement an RDS for MySQL read replica to cache database calls.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
Community vote distribution
B (100%)
elearningtakai Highly Voted 3 months ago
the best solution is to implement Amazon ElastiCache to cache the large datasets, which will store the frequently accessed data in memory, allowing for faster retrieval times. This can help to alleviate the frequent calls to the database, reduce latency, and improve the overall performance of the backend tier.
upvoted 5 times
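The cache-aside behavior described above can be sketched locally (an in-process dict stands in for ElastiCache; the function and dataset names are made up):

```python
# Sketch of the cache-aside pattern that ElastiCache enables.
calls_to_db = 0

def query_db(key: str) -> str:
    """Stand-in for the expensive, repeated RDS query."""
    global calls_to_db
    calls_to_db += 1
    return f"dataset-for-{key}"

cache: dict[str, str] = {}

def get(key: str) -> str:
    if key not in cache:          # cache miss: hit the database once
        cache[key] = query_db(key)
    return cache[key]             # identical calls are served from memory

for _ in range(5):
    get("top-products")
print(calls_to_db)  # 1 -- the database saw one query instead of five
```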
Abrar2022 Most Recent 2 weeks, 2 days ago
Thanks Tariq for the simplified answer below:
frequent identical calls = ElastiCache
upvoted 1 times
TariqKipkemei 1 month ago
frequent identical calls = ElastiCache
upvoted 1 times
Mikebonsi70 3 months ago
Tricky question, anyway.
upvoted 2 times
Mikebonsi70 3 months ago
Yes, caching is the solution, but is ElastiCache compatible with RDS for MySQL? So what about answer C with a DB read replica? For me it's C.
upvoted 1 times
aragon_saa 3 months, 2 weeks ago
B
upvoted 1 times
fruto123 3 months, 2 weeks ago
The key term is "identical datasets from the database": caching can solve this issue by serving the frequently used datasets from a cache instead of the DB.
upvoted 3 times
Question #387 Topic 1
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least privilege.
Which combination of actions should the solutions architect take to accomplish this goal? (Choose two.)
A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
Community vote distribution
DE (100%)
alexandercamachop 3 weeks, 5 days ago
Option D, creating a new IAM user and adding them to a group with an IAM policy that allows AWS CloudFormation actions only, ensures that the deployment engineer has the necessary permissions to perform AWS CloudFormation operations while limiting access to other resources and actions. This aligns with the principle of least privilege by providing the minimum required permissions for their job activities.
Option E, creating an IAM role with specific permissions for AWS CloudFormation stack operations and allowing the deployment engineer to assume that role, is another valid approach. By using an IAM role, the deployment engineer can assume the role when necessary, granting them temporary permissions to perform CloudFormation actions. This provides a level of separation and limits the permissions granted to the engineer to only the required CloudFormation operations.
upvoted 1 times
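A minimal sketch of the CloudFormation-only policy that options D and E describe (simplified; a real deployment would also need permissions, or a CloudFormation service role, for the resources the stacks create):

```python
import json

# Sketch of a least-privilege policy allowing only CloudFormation actions.
# Resource scoping is simplified for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["cloudformation:*"],
            "Resource": "*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attaching a document like this to the engineer's group (D), or to an assumable role scoped to specific stacks (E), grants CloudFormation access without the broad rights of PowerUsers or AdministratorAccess.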
elearningtakai 3 months ago
D & E are a good choices
upvoted 1 times
aragon_saa 3 months, 2 weeks ago
D, E
upvoted 2 times
Question #388 Topic 1
A company is deploying a two-tier web application in a VPC. The web tier is using an Amazon EC2 Auto Scaling group with public subnets that
span multiple Availability Zones. The database tier consists of an Amazon RDS for MySQL DB instance in separate private subnets. The web tier requires access to the database to retrieve product information.
The web application is not working as intended. The web application reports that it cannot connect to the database. The database is confirmed to be up and running. All configurations for the network ACLs, security groups, and route tables are still in their default states.
What should a solutions architect recommend to fix the application?
A. Add an explicit rule to the private subnet’s network ACL to allow traffic from the web tier’s EC2 instances.
B. Add a route in the VPC route table to allow traffic between the web tier’s EC2 instances and the database tier.
C. Deploy the web tier's EC2 instances and the database tier’s RDS instance into two separate VPCs, and configure VPC peering.
D. Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from the web tier’s security group.
Community vote distribution
D (100%)
smartegnine 4 days, 17 hours ago
Security Groups are tied on instance where as network ACL are tied to Subnet.
upvoted 1 times
TariqKipkemei 1 month ago
Security group defaults block all inbound traffic..Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from the web tiers security group
upvoted 2 times
elearningtakai 3 months ago
By default, all inbound traffic to an RDS instance is blocked. Therefore, an inbound rule needs to be added to the security group of the RDS instance to allow traffic from the security group of the web tier's EC2 instances.
upvoted 2 times
Russs99 3 months ago
D is the correct answer
upvoted 1 times
aragon_saa 3 months, 2 weeks ago
D
upvoted 1 times
KAUS2 3 months, 2 weeks ago
D is correct option
upvoted 1 times
Question #389 Topic 1
A company has a large dataset for its online advertising business stored in an Amazon RDS for MySQL DB instance in a single Availability Zone. The company wants business reporting queries to run without impacting the write operations to the production DB instance.
Which solution meets these requirements?
A. Deploy RDS read replicas to process the business reporting queries.
B. Scale out the DB instance horizontally by placing it behind an Elastic Load Balancer.
C. Scale up the DB instance to a larger instance type to handle write operations and queries.
D. Deploy the DB instance in multiple Availability Zones to process the business reporting queries.
Community vote distribution
A (100%)
TariqKipkemei 1 month ago
Load balance read operations = read replicas
upvoted 1 times
KAUS2 3 months, 2 weeks ago
Option "A" is the right answer. Read replica use cases: you have a production database that is taking on normal load, and you want to run a reporting application to run some analytics. You create a Read Replica to run the new workload there. The production application is unaffected. Read replicas are used for SELECT (= read) only kinds of statements (not INSERT, UPDATE, DELETE).
upvoted 2 times
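The read/write split a read replica enables can be sketched as a toy router (the endpoint names are hypothetical; real applications typically configure two connection strings):

```python
# Toy router for the read-replica pattern in answer A: reporting SELECTs go
# to the replica endpoint, writes go to the primary.
PRIMARY = "mydb.cluster-xyz.rds.amazonaws.com"
REPLICA = "mydb-replica.xyz.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Send read-only statements to the replica, everything else to primary."""
    return REPLICA if sql.lstrip().upper().startswith("SELECT") else PRIMARY

print(endpoint_for("SELECT region, SUM(clicks) FROM ads GROUP BY region"))
print(endpoint_for("INSERT INTO ads VALUES (1, 'campaign')"))
```

Routing the reporting queries to the replica is what keeps them from competing with production write traffic.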
cegama543 3 months, 2 weeks ago
option A is the best solution for ensuring that business reporting queries can run without impacting write operations to the production DB instance.
upvoted 3 times
Question #390 Topic 1
A company hosts a three-tier ecommerce application on a fleet of Amazon EC2 instances. The instances run in an Auto Scaling group behind an Application Load Balancer (ALB). All ecommerce data is stored in an Amazon RDS for MariaDB Multi-AZ DB instance.
The company wants to optimize customer session management during transactions. The application must store session data durably. Which solutions will meet these requirements? (Choose two.)
A. Turn on the sticky sessions feature (session affinity) on the ALB.
B. Use an Amazon DynamoDB table to store customer session information.
C. Deploy an Amazon Cognito user pool to manage user session information.
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information.
E. Use AWS Systems Manager Application Manager in the application to manage user session information.
Community vote distribution
AD (54%) AB (40%) 6%
fruto123 Highly Voted 3 months, 2 weeks ago
It is A and D. Proof is in link below.
https://aws.amazon.com/caching/session-management/
upvoted 11 times
maver144 Highly Voted 2 months, 3 weeks ago
ElastiCache is a cache; it cannot store sessions durably.
upvoted 5 times
mattcl Most Recent 1 week, 2 days ago
B and D: "The application must store session data durably." With sticky sessions the application doesn't store anything.
upvoted 1 times
Axeashes 2 weeks, 2 days ago
An option for data persistence for ElastiCache: Amazon ElastiCache for Redis doesn't support the AOF (Append Only File) feature, but you can achieve persistence by snapshotting your Redis data using the Backup and Restore feature. See https://aws.amazon.com/elasticache/faqs/ for details.
upvoted 1 times
dpaz 3 weeks, 6 days ago
ElastiCache is not durable so session info has to be stored in DynamoDB.
upvoted 2 times
Alizade 2 months ago
A. Turn on the sticky sessions feature (session affinity) on the ALB.
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information.
upvoted 1 times
Lalo 2 months, 1 week ago
https://aws.amazon.com/es/caching/session-management/
Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing that individual user’s session
In order to address scalability and to provide a shared data storage for sessions that can be accessible from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached.
upvoted 3 times
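Sticky-session affinity can be sketched as deterministic routing on the session id (server names are made up; a real ALB uses its own affinity cookie rather than a hash):

```python
import hashlib

# Sketch of sticky-session routing (answer A): hash the session cookie to a
# stable server choice so a user keeps hitting the same instance.
servers = ["web-1", "web-2", "web-3"]

def route(session_id: str) -> str:
    digest = hashlib.sha256(session_id.encode()).digest()
    return servers[digest[0] % len(servers)]

# The same session always lands on the same server...
assert route("sess-abc") == route("sess-abc")
# ...but the session data still lives only on that one instance, which is
# why the question also requires a durable shared store.
print(route("sess-abc"))
```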
pmd2023 2 months, 1 week ago
Redis was not built to be a durable and consistent database. If you need a durable, Redis-compatible database, consider Amazon MemoryDB for Redis. Because MemoryDB uses a durable transactional log that stores data across multiple Availability Zones (AZs), you can use it as your primary database. MemoryDB is purpose-built to enable developers to use the Redis API without worrying about managing a separate cache, database, or the underlying infrastructure. https://aws.amazon.com/redis/
upvoted 1 times
kraken21 2 months, 3 weeks ago
"Optimize customer session management during transactions": the session store is used during the transaction, while MariaDB handles pre/post-transaction storage.
upvoted 1 times
test_devops_aws 3 months, 1 week ago
D is incorrect, but DynamoDB does not support MariaDB. Can someone explain?
upvoted 1 times
DynamoDB here is a new DB just for the purpose of storing session data... MariaDB is for eCommerce data.
upvoted 1 times
The company wants to optimize customer session management during transactions ->
A. Turn on the sticky sessions feature (session affinity) on the ALB.
Sticky sessions for your Application Load Balancer https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html
The application must "store" session data "durably" not in memory.
B. Use an Amazon DynamoDB table to store customer session information.
upvoted 4 times
kraken21 2 months, 3 weeks ago
"optimize customer session management during transactions":' During transactions' is the key here. DynamoDB will create another hop and increase latency.
upvoted 2 times
Karlos99 3 months, 2 weeks ago
The application must store session data durably : DynamoDB
upvoted 3 times
taehyeki 3 months, 2 weeks ago
bdbdbdbdbd
upvoted 2 times
care to explain?
upvoted 1 times
cegama543 3 months, 2 weeks ago
A. Turn on the sticky sessions feature (session affinity) on the ALB.
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information.
The best solution for optimizing customer session management during transactions is to turn on the sticky sessions feature (session affinity) on the ALB to ensure that each client request is routed to the same web server in the Auto Scaling group. This will ensure that the customer session is maintained throughout the transaction.
In addition, the company should deploy an Amazon ElastiCache for Redis cluster to store customer session information durably. This will ensure that the customer session information is readily available and easily accessible during a transaction.
upvoted 4 times
cegama543 3 months, 2 weeks ago
A company hosts a three-tier ecommerce application on a fleet of Amazon EC2 instances. The instances run in an Auto Scaling group behind an Application Load Balancer (ALB). All ecommerce data is stored in an Amazon RDS for MariaDB Multi-AZ DB instance.
The company wants to optimize customer session management during transactions. The application must store session data durably. Which solutions will meet these requirements? (Choose two.)
A. Turn on the sticky sessions feature (session affinity) on the ALB.
B. Use an Amazon DynamoDB table to store customer session information.
C. Deploy an Amazon Cognito user pool to manage user session information.
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information.
E. Use AWS Systems Manager Application Manager in the application to manage user session information.
upvoted 2 times
Question #391 Topic 1
A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for
PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company’s recovery point objective (RPO) is 2 hours.
The backup strategy must maximize scalability and optimize resource utilization for this environment. Which solution will meet these requirements?
A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
Community vote distribution
C (78%) B (20%)
elearningtakai Highly Voted 3 months, 1 week ago
If there is no temporary local storage on the EC2 instances, then snapshots of EBS volumes are not necessary. Therefore, if your application does not require temporary storage on EC2 instances, using AMIs to back up the web and application tiers is sufficient to restore the system after a failure.
Snapshots of EBS volumes would be necessary if you want to back up the entire EC2 instance, including any applications and temporary data stored on the EBS volumes attached to the instances. When you take a snapshot of an EBS volume, it backs up the entire contents of that volume. This ensures that you can restore the entire EC2 instance to a specific point in time more quickly. However, if there is no temporary data stored on the EBS volumes, then snapshots of EBS volumes are not necessary.
upvoted 15 times
MssP 3 months ago
I think "temporary local storage" refers to the instance store; no instance store is required. EBS is durable storage, not temporary.
upvoted 1 times
MssP 3 months ago
Look at the first paragraph. https://repost.aws/knowledge-center/instance-store-vs-ebs
upvoted 1 times
CloudForFun Highly Voted 3 months, 2 weeks ago
The web application does not require temporary local storage on the EC2 instances => No EBS snapshot is required, retaining the latest AMI is enough.
upvoted 7 times
kruasan Most Recent 1 month, 4 weeks ago
Since the application has no local data on instances, AMIs alone can meet the RPO by restoring instances from the most recent AMI backup. When combined with automated RDS backups for the database, this provides a complete backup solution for this environment.
The other options involving EBS snapshots would be unnecessary given the stateless nature of the instances. AMIs provide all the backup needed for the app tier.
This uses native, automated AWS backup features that require minimal ongoing management:
AMI automated backups provide point-in-time recovery for the stateless app tier.
RDS automated backups provide point-in-time recovery for the database.
upvoted 2 times
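The RPO arithmetic behind this discussion can be checked directly (the timestamps below are illustrative):

```python
from datetime import datetime, timedelta

# Data loss at failure = failure time minus the last recovery point.
# A backup strategy meets the RPO if that gap never exceeds it.
def meets_rpo(last_recovery_point: datetime, failure: datetime,
              rpo: timedelta) -> bool:
    return failure - last_recovery_point <= rpo

rpo = timedelta(hours=2)
failure = datetime(2023, 7, 1, 13, 30)
# Snapshots every 2 hours: the worst case just meets the RPO.
print(meets_rpo(datetime(2023, 7, 1, 11, 30), failure, rpo))  # True
# RDS point-in-time recovery restores to within minutes, well inside the RPO.
print(meets_rpo(datetime(2023, 7, 1, 13, 25), failure, rpo))  # True
```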
neosis91 2 months, 1 week ago
BBBBBBBBBB
upvoted 1 times
nileshlg 3 months, 1 week ago
Answer is C. Keyword to notice "Stateless"
upvoted 2 times
cra2yk 3 months, 2 weeks ago
why B? I mean "stateless" and "does not require temporary local storage" have indicate that we don't need to take snapshot for ec2 volume.
upvoted 3 times
ktulu2602 3 months, 2 weeks ago
Option B is the most appropriate solution for the given requirements.
With this solution, a snapshot lifecycle policy can be created to take Amazon Elastic Block Store (Amazon EBS) snapshots periodically, which will ensure that EC2 instances can be restored in the event of an outage. Additionally, automated backups can be enabled in Amazon RDS for PostgreSQL to take frequent backups of the database tier. This will help to minimize the RPO to 2 hours.
Taking snapshots of Amazon EBS volumes of the EC2 instances and database every 2 hours (Option A) may not be cost-effective and efficient, as this approach would require taking regular backups of all the instances and volumes, regardless of whether any changes have occurred or not.
Retaining the latest Amazon Machine Images (AMIs) of the web and application tiers (Option C) would provide only an image backup and not a data backup, which is required for the database tier. Taking snapshots of Amazon EBS volumes of the EC2 instances every 2 hours and enabling automated backups in Amazon RDS and using point-in-time recovery (Option D) would result in higher costs and may not be necessary to meet the RPO requirement of 2 hours.
upvoted 4 times
cegama543 3 months, 2 weeks ago
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.
The best solution is to configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots, and enable automated backups in Amazon RDS to meet the RPO. An RPO of 2 hours means that the company needs to ensure that the backup is taken every 2 hours to minimize data loss in case of a disaster. Using a snapshot lifecycle policy to take Amazon EBS snapshots will ensure that the web and application tier can be restored quickly and efficiently in case of a disaster. Additionally, enabling automated backups in Amazon RDS will ensure that the database tier can be restored quickly and efficiently in case of a disaster. This solution maximizes scalability and optimizes resource utilization because it uses automated backup solutions built into AWS.
upvoted 3 times
Question #392 Topic 1
A company wants to deploy a new public web application on AWS. The application includes a web server tier that uses Amazon EC2 instances. The application also includes a database tier that uses an Amazon RDS for MySQL DB instance.
The application must be secure and accessible for global customers that have dynamic IP addresses. How should a solutions architect configure the security groups to meet these requirements?
A. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB instance to allow inbound traffic on port 3306 from the security group of the web servers.
B. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for the DB instance to allow inbound traffic on port 3306 from the security group of the web servers.
C. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for the DB instance to allow inbound traffic on port 3306 from the IP addresses of the customers.
D. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB instance to allow inbound traffic on port 3306 from 0.0.0.0/0.
Community vote distribution
A (77%) B (23%)
jayce5 3 weeks, 3 days ago
Should be A since the customer IPs are dynamically.
upvoted 1 times
omoakin 1 month ago
BBBBBBBBBBBBBBBBBBBBBB
from customers IPs
upvoted 1 times
MostafaWardany 2 weeks, 3 days ago
Correct answer A, customer dynamic IPs ==>> 443 from 0.0.0.0/0
upvoted 1 times
TariqKipkemei 1 month ago
dynamic source ips = allow all traffic - Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB instance to allow inbound traffic on port 3306 from the security group of the web servers.
upvoted 2 times
elearningtakai 3 months ago
If the customers have dynamic IP addresses, option A would be the most appropriate solution for allowing global access while maintaining security.
upvoted 3 times
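The dynamic-IP argument for option A can be illustrated with the stdlib `ipaddress` module (the addresses below are documentation examples):

```python
import ipaddress

# Why answer A opens 443 to 0.0.0.0/0: customers have dynamic IPs, so any
# fixed allow-list (option B) breaks whenever an IP changes.
def allowed_by(cidr: str, client_ip: str) -> bool:
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(cidr)

print(allowed_by("0.0.0.0/0", "203.0.113.7"))        # True: any client works
print(allowed_by("198.51.100.0/24", "203.0.113.7"))  # False: a customer's new
                                                     # dynamic IP is blocked
```

The database tier stays protected not by IP filtering but by only accepting port 3306 from the web servers' security group.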
Kenzo 3 months ago
Correct answer is A. B and C are out.
D is out because it is accepting traffic from every where instead of from webservers only
upvoted 3 times
Grace83 3 months, 1 week ago
A is correct
upvoted 3 times
WherecanIstart 3 months, 1 week ago
Keyword dynamic ...A is the right answer. If the IP were static and specific, B would be the right answer
upvoted 3 times
kprakashbehera 3 months, 2 weeks ago
Ans - A
upvoted 1 times
taehyeki 3 months, 2 weeks ago
aaaaaa
upvoted 1 times
Question #393 Topic 1
A payment processing company records all voice communication with its customers and stores the audio files in an Amazon S3 bucket. The company needs to capture the text from the audio files. The company must remove from the text any personally identifiable information (PII) that belongs to customers.
What should a solutions architect do to meet these requirements?
A. Process the audio files by using Amazon Kinesis Video Streams. Use an AWS Lambda function to scan for known PII patterns.
B. When an audio file is uploaded to the S3 bucket, invoke an AWS Lambda function to start an Amazon Textract task to analyze the call recordings.
C. Configure an Amazon Transcribe transcription job with PII redaction turned on. When an audio file is uploaded to the S3 bucket, invoke an AWS Lambda function to start the transcription job. Store the output in a separate S3 bucket.
D. Create an Amazon Connect contact flow that ingests the audio files with transcription turned on. Embed an AWS Lambda function to scan for known PII patterns. Use Amazon EventBridge to start the contact flow when an audio file is uploaded to the S3 bucket.
Community vote distribution
C (100%)
SimiTik 2 months ago
C
Amazon Transcribe is a service provided by Amazon Web Services (AWS) that converts speech to text using automatic speech recognition (ASR) technology.
upvoted 2 times
elearningtakai 2 months, 4 weeks ago
Option C is the most suitable solution as it suggests using Amazon Transcribe with PII redaction turned on. When an audio file is uploaded to the S3 bucket, an AWS Lambda function can be used to start the transcription job. The output can be stored in a separate S3 bucket to ensure that the PII redaction is applied to the transcript. Amazon Transcribe can redact PII such as credit card numbers, social security numbers, and phone numbers.
upvoted 3 times
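As a rough local illustration of what PII redaction does to a transcript (this toy regex pass is not the managed Transcribe feature, which handles many more PII types):

```python
import re

# Toy sketch of transcript redaction: replace known PII patterns with
# labeled placeholders, similar in spirit to Transcribe's redacted output.
PII_PATTERNS = {
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(transcript: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

print(redact("My card is 4111 1111 1111 1111 and SSN is 123-45-6789."))
# -> My card is [CREDIT_CARD] and SSN is [SSN].
```

With the managed service, turning on redaction in the transcription job (option C) produces this kind of sanitized text without hand-maintained patterns.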
WherecanIstart 3 months, 1 week ago
C for sure
upvoted 1 times
Ruhi02 3 months, 2 weeks ago
answer c
upvoted 1 times
Question #394 Topic 1
A company is running a multi-tier ecommerce web application in the AWS Cloud. The application runs on Amazon EC2 instances with an Amazon RDS for MySQL Multi-AZ DB instance. Amazon RDS is configured with the latest generation DB instance with 2,000 GB of storage in a General
Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. The database performance affects the application during periods of high demand.
A database administrator analyzes the logs in Amazon CloudWatch Logs and discovers that the application performance always degrades when the number of read and write IOPS is higher than 20,000.
What should a solutions architect do to improve the application performance?
A. Replace the volume with a magnetic volume.
B. Increase the number of IOPS on the gp3 volume.
C. Replace the volume with a Provisioned IOPS SSD (io2) volume.
D. Replace the 2,000 GB gp3 volume with two 1,000 GB gp3 volumes.
Community vote distribution
B (49%) D (33%) C (18%)
Bezha Highly Voted 3 months, 1 week ago
A - Magnetic: max 200 IOPS - Wrong
B - gp3: max 16,000 IOPS per volume - Wrong
C - RDS does not support io2 - Wrong
D - Correct; two gp3 volumes with 16,000 IOPS each: 2 × 16,000 = 32,000 IOPS
upvoted 16 times
joechen2023 1 week, 4 days ago
https://repost.aws/knowledge-center/ebs-volume-type-differences RDS does support io2
upvoted 1 times
wRhlH 6 days, 17 hours ago
That link is about EBS, not RDS.
upvoted 1 times
Michal_L_95 Highly Voted 3 months, 2 weeks ago
It can not be option C as RDS does not support io2 storage type (only io1).
Here is a link to the RDS storage documentation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html Also it is not the best option to take Magnetic storage as it supports max 1000 IOPS.
I vote for option B as gp3 storage type supports up to 64 000 IOPS where question mentioned with problem at level of 20 000.
upvoted 7 times
joechen2023 1 week, 4 days ago
check the link below https://repost.aws/knowledge-center/ebs-volume-type-differences it states:
General Purpose SSD volumes are good for a wide variety of transactional workloads that require less than the following:
16,000 IOPS
1,000 MiB/s of throughput
160-TiB volume size
upvoted 1 times
GalileoEC2 3 months ago
is this true? Amazon RDS (Relational Database Service) supports the Provisioned IOPS SSD (io2) storage type for its database instances. The io2 storage type is designed to deliver predictable performance for critical and highly demanding database workloads. It provides higher durability, higher IOPS, and lower latency compared to other Amazon EBS (Elastic Block Store) storage types. RDS offers the option to choose between the General Purpose SSD (gp3) and Provisioned IOPS SSD (io2) storage types for database instances.
upvoted 1 times
samehpalass Most Recent 1 week ago
B - increase gp3 IOPS
DB storage sizes for gp3 above 400 GiB support up to 64,000 IOPS; please check the link below:
upvoted 1 times
Answer B: For RDS Mysql -> 12,000–64,000 IOPS
upvoted 1 times
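The arithmetic behind the B-versus-D debate, using the limits cited in this thread (verify against the current AWS docs; these numbers change over time):

```python
# The workload degrades above 20,000 read+write IOPS, so any fix must
# provision headroom beyond that. Limits below are as cited in the thread.
required = 20_000

# Answer B's claim: an RDS for MySQL gp3 volume >= 400 GiB can scale its
# provisioned IOPS up to 64,000.
single_gp3_scaled = 64_000

# Answer D's claim: striping two gp3 volumes at 16,000 IOPS each.
two_striped = 2 * 16_000

print(single_gp3_scaled > required)  # True
print(two_striped > required)        # True
```

Both figures clear the 20,000 IOPS threshold; the disagreement in the thread is over which limits actually apply to RDS-managed storage.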
B - RDS gp3 supports up to 64,000 max IOPS
C - RDS has only the io1 disk type
D - RDS has no menu for separate EBS disks. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
gp3 supports flexible IOPS, tested 13 June 2023
upvoted 1 times
Answer C (from abylead)
EBS is low-latency block storage, network-attached to EC2 instances with single- or multi-attach volumes, like physical local disk drives. Provisioned IOPS volumes, backed by SSDs, are the highest-performance EBS storage volumes, designed for your critical, IOPS- and throughput-intensive workloads that require low latency. Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and EBS delivers the provisioned performance 99.9% of the time.
EBS perf: https://aws.amazon.com/ebs/features/
Less correct & incorrect (infeasible & inadequate) answers:
A) magnetic vol. worsens perf: inadequate.
B) increasing the number of IOPS on the gp3 vol is limited: infeasible.
D) replace 2kGB vol with 2x 1kGB gp3 vols is limited: infeasible.
upvoted 1 times
CCCCCCCCCCCCCC
upvoted 1 times
I just tried this from the console (on 24 May 2023) and.. B is the answer, simply increase the IOPS of the SSD gp3.
Provisioned IOPS SSD (io2) is not supported by RDS but SSD (io1) is supported.
Option D does not mention anything on the IOPS
upvoted 2 times
{C} - io2 is not supported
{D} - ??
{B} - definitely feasible
Amazon RDS gp3 volumes give you the flexibility to provision storage performance independently of storage capacity, paying only for the resources you need. Every gp3 volume provides you the ability to select from 20 GiB to 64 TiB of storage capacity, with a baseline storage performance of 3,000 IOPS included with the price of storage. For workloads that need even more performance, you can scale up to 64,000 IOPS for an additional cost.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 3 times
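The gp3 thresholds quoted in the comments above can be sketched as a small helper. This is illustrative only, encoding the RDS MySQL figures cited here (3,000 IOPS baseline below 400 GiB; 12,000–64,000 provisionable IOPS at 400 GiB and above); check the linked RDS storage docs for the authoritative values per engine.

```python
# Illustrative sketch of the RDS MySQL gp3 IOPS rules cited in this thread.
# Thresholds are taken from the comments/linked docs, not exhaustive per engine.

def gp3_iops_range(storage_gib: int) -> tuple[int, int]:
    """Return the (min, max) provisionable IOPS for an RDS MySQL gp3 volume."""
    if storage_gib < 400:
        # Below 400 GiB the 3,000 IOPS baseline is fixed.
        return (3000, 3000)
    # At 400 GiB and above, IOPS can be provisioned from 12,000 up to 64,000.
    return (12000, 64000)

print(gp3_iops_range(100))   # (3000, 3000)
print(gp3_iops_range(2000))  # (12000, 64000) - the 2,000 GB volume in this question
```

So for the 2,000 GB volume in the question, simply raising the provisioned IOPS on the existing gp3 volume (option B) is feasible up to 64,000.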
Disk IO causes performance problems, so you need to replace it with a better performance disk. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumetypes.html
upvoted 1 times
gp3 can provide up to 64,000 IOPS.
upvoted 2 times
CCCCCCCCCC
upvoted 1 times
elearningtakai 3 months ago
RDS currently does not support io2; gp3 supports up to 64,000 IOPS.
https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-rds-general-purpose-gp3-storage-volumes/
upvoted 4 times
volkan4242 3 months ago
C
Based on the scenario described, the best solution to improve the application performance would be to replace the 2,000 GB gp3 volume with a Provisioned IOPS SSD (io2) volume.
Explanation:
The performance degradation observed during periods of high demand is likely due to the database hitting the IOPS limit of the gp3 volume. While increasing the number of IOPS on the gp3 volume is an option, it may not be enough to handle the expected load and could also increase costs.
Using a Provisioned IOPS SSD (io2) volume would provide consistent and high-performance storage for the database. It allows the database administrator to specify the number of IOPS and throughput needed for the database, and the storage is automatically replicated in multiple Availability Zones for high availability.
Replacing the volume with a magnetic volume or splitting the volume into two 1,000 GB gp3 volumes would not provide the required level of performance and may also introduce additional complexity and management overhead.
upvoted 1 times
klayytech 3 months ago
To improve the application performance, you can replace the 2,000 GB gp3 volume with two 1,000 GB gp3 volumes. This will increase the number of IOPS available to the database and improve performance.
upvoted 1 times
Question #395 Topic 1
An IAM user made several configuration changes to AWS resources in their company's account during a production deployment last week. A
solutions architect learned that a couple of security group rules are not configured as desired. The solutions architect wants to confirm which IAM user was responsible for making changes.
Which service should the solutions architect use to find the desired information?
A. Amazon GuardDuty
B. Amazon Inspector
C. AWS CloudTrail
D. AWS Config
Community vote distribution
C (100%)
cegama543 Highly Voted 3 months, 2 weeks ago
C. AWS CloudTrail
The best option is to use AWS CloudTrail to find the desired information. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of AWS account activities. CloudTrail can be used to log all changes made to resources in an AWS account, including changes made by IAM users, EC2 instances, AWS management console, and other AWS services. By using CloudTrail, the solutions architect can identify the IAM user who made the configuration changes to the security group rules.
upvoted 6 times
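To make the CloudTrail approach concrete: a real query would use boto3's cloudtrail.lookup_events (e.g. with an EventName lookup attribute), then inspect the returned events for the username. The sketch below skips the API call and just shows the filtering step over made-up events shaped like CloudTrail results; the event names are the real EC2 security-group change APIs, everything else is hypothetical.

```python
# Illustrative only: filter CloudTrail-style events to find which IAM user
# changed security group rules. A live query would come from
# boto3 cloudtrail.lookup_events; these sample dicts are made up.

def who_changed_security_groups(events):
    """Return usernames for security-group rule change events."""
    change_events = {"AuthorizeSecurityGroupIngress", "RevokeSecurityGroupIngress"}
    return [e["Username"] for e in events if e["EventName"] in change_events]

sample = [
    {"EventName": "RunInstances", "Username": "alice"},
    {"EventName": "AuthorizeSecurityGroupIngress", "Username": "bob"},
]
print(who_changed_security_groups(sample))  # ['bob']
```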
dcp 3 months, 2 weeks ago
C. AWS CloudTrail
upvoted 2 times
kprakashbehera 3 months, 2 weeks ago
CloudTrail logs will tell who did that
upvoted 2 times
KAUS2 3 months, 2 weeks ago
Option "C" AWS CloudTrail is correct.
upvoted 2 times
Nithin1119 3 months, 2 weeks ago
cccccc
upvoted 2 times
Question #396 Topic 1
A company has implemented a self-managed DNS service on AWS. The solution consists of the following:
Amazon EC2 instances in different AWS Regions
Endpoints of a standard accelerator in AWS Global Accelerator
The company wants to protect the solution against DDoS attacks. What should a solutions architect do to meet this requirement?
A. Subscribe to AWS Shield Advanced. Add the accelerator as a resource to protect.
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
C. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the accelerator.
D. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the EC2 instances.
Community vote distribution
A (100%)
Abrar2022 2 weeks, 2 days ago
DDoS attacks = AWS Shield Advanced, with the Global Accelerator as the resource to protect
upvoted 1 times
TariqKipkemei 1 month ago
DDoS attacks = AWS Shield Advanced
upvoted 2 times
WherecanIstart 3 months, 1 week ago
DDoS attacks = AWS Shield Advanced
Shield Advanced protects Global Accelerator, NLB, ALB, etc.
upvoted 4 times
nileshlg 3 months, 1 week ago
Answer is A
https://docs.aws.amazon.com/waf/latest/developerguide/ddos-event-mitigation-logic-gax.html
upvoted 1 times
ktulu2602 3 months, 2 weeks ago
AWS Shield is a managed service that provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS. AWS Shield Standard is automatically enabled to all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service. AWS Shield Advanced provides additional protections against more sophisticated and larger attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53.
upvoted 2 times
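A tiny sketch of the eligibility check implied by the list above. The service set mirrors the resource types ktulu2602 names; the accelerator ARN is hypothetical. Actually enabling protection would be a boto3 shield.create_protection(Name=..., ResourceArn=...) call on that ARN.

```python
# Illustrative: is a resource's service one that Shield Advanced can protect,
# per the list quoted above? The ARN below is hypothetical.

PROTECTABLE = {"cloudfront", "ec2", "elasticloadbalancing",
               "globalaccelerator", "route53"}

def arn_service(arn: str) -> str:
    """Extract the service field from an ARN (arn:partition:service:...)."""
    return arn.split(":")[2]

accel_arn = "arn:aws:globalaccelerator::123456789012:accelerator/abcd"
print(arn_service(accel_arn) in PROTECTABLE)  # True
```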
taehyeki 3 months, 2 weeks ago
aaaaa
an accelerator cannot be attached to Shield
upvoted 1 times
ktulu2602 3 months, 2 weeks ago
Yes it can:
AWS Shield is a managed service that provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS. AWS Shield Standard is automatically enabled to all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service. AWS Shield Advanced provides additional protections against more sophisticated and larger attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53.
upvoted 1 times
taehyeki 3 months, 2 weeks ago
bbbbbbbbb
upvoted 1 times
enzomv 3 months, 2 weeks ago
Your origin servers can be Amazon Simple Storage Service (S3), Amazon EC2, Elastic Load Balancing, or a custom server outside of AWS. You can also enable AWS Shield Advanced directly on Elastic Load Balancing or Amazon EC2 in the following AWS Regions - Northern Virginia, Ohio, Oregon, Northern California, Montreal, São Paulo, Ireland, Frankfurt, London, Paris, Stockholm, Singapore, Tokyo, Sydney, Seoul, Mumbai, Milan, and Cape Town.
My answer is B
upvoted 1 times
enzomv 3 months, 2 weeks ago
https://docs.aws.amazon.com/waf/latest/developerguide/ddos-event-mitigation-logic-gax.html
Sorry I meant A
upvoted 1 times
Question #397 Topic 1
An ecommerce company needs to run a scheduled daily job to aggregate and filter sales records for analytics. The company stores the sales
records in an Amazon S3 bucket. Each object can be up to 10 GB in size. Based on the number of sales events, the job can take up to an hour to complete. The CPU and memory usage of the job are constant and are known in advance.
A solutions architect needs to minimize the amount of operational effort that is needed for the job to run. Which solution meets these requirements?
A. Create an AWS Lambda function that has an Amazon EventBridge notification. Schedule the EventBridge event to run once a day.
B. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the API with the function. Create an Amazon EventBridge scheduled event that calls the API and invokes the function.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
Community vote distribution
C (100%)
ktulu2602 Highly Voted 3 months, 2 weeks ago
The requirement is to run a daily scheduled job to aggregate and filter sales records for analytics in the most efficient way possible. Based on the requirement, we can eliminate option A and B since they use AWS Lambda which has a limit of 15 minutes of execution time, which may not be sufficient for a job that can take up to an hour to complete.
Between options C and D, option C is the better choice since it uses AWS Fargate which is a serverless compute engine for containers that eliminates the need to manage the underlying EC2 instances, making it a low operational effort solution. Additionally, Fargate also provides instant scale-up and scale-down capabilities to run the scheduled job as per the requirement.
Therefore, the correct answer is:
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
upvoted 13 times
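For anyone wanting to see what option C looks like in practice: a minimal sketch of the parameter dicts that boto3's events.put_rule / events.put_targets would take to launch the Fargate task on a daily schedule. All names, ARNs, and the subnet ID are hypothetical.

```python
# Hedged sketch of an EventBridge schedule that launches an ECS Fargate task
# (option C). These dicts mirror the boto3 put_rule/put_targets shapes;
# every identifier below is made up.

rule = {
    "Name": "daily-sales-job",
    "ScheduleExpression": "cron(0 3 * * ? *)",  # once a day at 03:00 UTC
}

target = {
    "Id": "sales-aggregation-task",
    "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/analytics",      # hypothetical
    "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",          # hypothetical
    "EcsParameters": {
        "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/sales-job",
        "LaunchType": "FARGATE",  # no EC2 instances to patch or scale
        "NetworkConfiguration": {
            "awsvpcConfiguration": {"Subnets": ["subnet-abc123"]}       # hypothetical
        },
    },
}

print(target["EcsParameters"]["LaunchType"])  # FARGATE
```

The constant, known CPU/memory from the question maps directly onto the task definition's fixed Fargate task size.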
TariqKipkemei Most Recent 1 month ago
The best option is C.
'The job can take up to an hour to complete' rules out lambda functions as they only execute up to 15 mins. Hence option A and B are out. 'The CPU and memory usage of the job are constant and are known in advance' rules out the need for autoscaling. Hence option D is out.
upvoted 2 times
imvb88 2 months, 1 week ago
"1-hour job" -> A, B out since max duration for Lambda is 15 min
Between C and D, "minimize operational effort" means Fargate -> C
upvoted 3 times
klayytech 3 months ago
The solution that meets the requirements with the least operational overhead is to create a **Regional AWS WAF web ACL with a rate-based rule** and associate the web ACL with the API Gateway stage. This solution will protect the application from HTTP flood attacks by monitoring incoming requests and blocking requests from IP addresses that exceed the predefined rate.
Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint is also a good solution but it requires more operational overhead than the previous solution.
Using Amazon CloudWatch metrics to monitor the Count metric and alerting the security team when the predefined rate is reached is not a solution that can protect against HTTP flood attacks.
Creating an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours is not a solution that can protect against HTTP flood attacks.
upvoted 1 times
klayytech 3 months ago
The solution that meets these requirements is C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job. This solution will minimize the amount of operational effort that is needed for the job to run.
AWS Lambda which has a limit of 15 minutes of execution time,
upvoted 1 times
Question #398 Topic 1
A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company’s internet connection can support an
upload speed of 100 Mbps.
Which solution meets these requirements MOST cost-effectively?
A. Use Amazon S3 multi-part upload functionality to transfer the files over HTTPS.
B. Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the data over the VPN connection.
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to Amazon S3.
D. Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
Community vote distribution
C (100%)
shanwford Highly Voted 2 months, 2 weeks ago
With the existing data link the transfer takes ~ 600 days in the best case. Thus, (A) and (B) are not applicable. Solution (D) could meet the target with a transfer time of 6 days, but the lead time for the direct connect deployment can take weeks! Thus, (C) is the only valid solution.
upvoted 5 times
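A quick back-of-the-envelope check of the numbers quoted above, using decimal units (600 TB = 600e12 bytes, 100 Mbps = 100e6 bits/s) and ignoring protocol overhead:

```python
# Rough transfer-time estimate backing up the comment above.

def transfer_days(terabytes: float, megabits_per_sec: float) -> float:
    bits = terabytes * 1e12 * 8
    seconds = bits / (megabits_per_sec * 1e6)
    return seconds / 86400

print(round(transfer_days(600, 100)))     # ~556 days on the 100 Mbps link
print(round(transfer_days(600, 10_000)))  # ~6 days on 10 Gbps Direct Connect
```

So the existing link misses the 2-week window by over an order of magnitude, and even 10 Gbps only works if the Direct Connect circuit were already provisioned, which is why Snowball Edge (C) wins.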
TariqKipkemei Most Recent 1 month ago
C is the best option considering the time and bandwidth limitations
upvoted 1 times
pbpally 1 month, 2 weeks ago
We need the admin in here to tell us how they plan on achieving this over such a slow connection lol. It's C, folks.
upvoted 2 times
KAUS2 3 months, 2 weeks ago
Best option is to use multiple AWS Snowball Edge Storage Optimized devices. Option "C" is the correct one.
upvoted 1 times
ktulu2602 3 months, 2 weeks ago
All others are limited by the bandwidth limit
upvoted 1 times
ktulu2602 3 months, 2 weeks ago
Or provisioning time in the D case
upvoted 1 times
KZM 3 months, 2 weeks ago
It is C. Snowball (from Snow Family).
upvoted 1 times
cegama543 3 months, 2 weeks ago
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to Amazon S3.
The best option is to use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices and use the devices to transfer the data to Amazon S3. Snowball Edge is a petabyte-scale data transfer device that can help transfer large amounts of data securely and quickly. Using Snowball Edge can be the most cost-effective solution for transferring large amounts of data over long distances and can help meet the requirement of transferring 600 TB of data within two weeks.
upvoted 3 times
Question #399 Topic 1
A financial company hosts a web application on AWS. The application uses an Amazon API Gateway Regional API endpoint to give users the
ability to retrieve current stock prices. The company’s security team has noticed an increase in the number of API requests. The security team is concerned that HTTP flood attacks might take the application offline.
A solutions architect must design a solution to protect the application from this type of attack. Which solution meets these requirements with the LEAST operational overhead?
A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours.
B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage.
C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the predefined rate is reached.
D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint. Create an AWS Lambda function to block requests from IP addresses that exceed the predefined rate.
Community vote distribution
B (100%)
maxicalypse 2 months, 3 weeks ago
B is correct
upvoted 1 times
elearningtakai 2 months, 4 weeks ago
A rate-based rule in AWS WAF allows the security team to configure thresholds that trigger rate-based rules, which enable AWS WAF to track the rate of requests for a specified time period and then block them automatically when the threshold is exceeded. This provides the ability to prevent HTTP flood attacks with minimal operational overhead.
upvoted 2 times
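For reference, a hedged sketch of what the option-B rate-based rule looks like. The dict mirrors the shape of a Rules entry passed to boto3's wafv2.create_web_acl; the rule name, metric name, and the 2,000-request limit are hypothetical values, not from the question.

```python
# Sketch of an AWS WAF rate-based rule (option B). Shapes follow the WAFv2
# Rules structure; the specific name/limit values are illustrative.

rate_rule = {
    "Name": "http-flood-limit",
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,            # max requests per 5-minute window, per IP
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},          # block offenders automatically
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "httpFloodLimit",
    },
}

print(rate_rule["Statement"]["RateBasedStatement"]["Limit"])  # 2000
```

The resulting web ACL is then associated with the API Gateway stage; no Lambda@Edge or manual alerting pipeline to operate.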
Question #400 Topic 1
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not want this new service to affect the performance of the current application.
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
Community vote distribution
C (100%)
Buruguduystunstugudunstuy 3 months ago
The best solution to meet these requirements with the least amount of operational overhead is to enable Amazon DynamoDB Streams on the table and use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe. This solution requires minimal configuration and infrastructure setup, and Amazon DynamoDB Streams provide a low-latency way to capture changes to the DynamoDB table. The triggers automatically capture the changes and publish them to the SNS topic, which notifies the internal teams.
upvoted 3 times
Buruguduystunstugudunstuy 3 months ago
Answer A is not a suitable solution because it requires additional configuration to notify the internal teams, and it could add operational overhead to the application.
Answer B is not the best solution because it requires changes to the current application, which may affect its performance, and it creates additional work for the teams to subscribe to multiple topics.
Answer D is not a good solution because it requires a cron job to scan the table every minute, which adds additional operational overhead to the system.
Therefore, the correct answer is C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon SNS topic to which the teams can subscribe.
upvoted 2 times
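Option C in miniature: an illustrative Lambda-style handler that turns DynamoDB Streams INSERT records into alert messages for a single shared SNS topic. The record shape is a simplified DynamoDB Streams event, the attribute names (EventType, Location) are hypothetical, and the actual boto3 sns.publish call is left as a comment since it needs live credentials.

```python
# Illustrative handler for option C: DynamoDB Streams trigger -> one SNS topic.
# Attribute names and the sample event are made up for the sketch.

def build_alert(record):
    """Turn an INSERT stream record into an alert message, else None."""
    if record.get("eventName") != "INSERT":
        return None
    item = record["dynamodb"]["NewImage"]
    return f"New weather event: {item['EventType']['S']} at {item['Location']['S']}"

def handler(event, context=None):
    alerts = [m for m in (build_alert(r) for r in event["Records"]) if m]
    # for msg in alerts:
    #     sns.publish(TopicArn=TOPIC_ARN, Message=msg)  # all four teams subscribe here
    return alerts

sample = {"Records": [{
    "eventName": "INSERT",
    "dynamodb": {"NewImage": {"EventType": {"S": "Hail"}, "Location": {"S": "Austin"}}},
}]}
print(handler(sample))  # ['New weather event: Hail at Austin']
```

Because the trigger reads from the stream rather than the table, the existing application's read/write path is untouched.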
Hemanthgowda1932 3 months ago
C is correct
upvoted 1 times
Santosh43 3 months, 1 week ago
definitely C
upvoted 1 times
Question #401 Topic 1
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application
resides in the company's data center. The application recently experienced data loss after a database server crashed because of an unexpected power outage.
The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user demand.
Which solution will meet these requirements?
A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
B. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone. Deploy the database on an EC2 instance. Enable EC2 Auto Recovery.
C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance with a read replica in a single Availability Zone. Promote the read replica to replace the primary DB instance if the primary DB instance fails.
D. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the primary and secondary database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage between the instances.
Community vote distribution
A (86%) 7%
TariqKipkemei 1 month ago
Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
The correct answer is A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
To make an existing application highly available and resilient while avoiding any single points of failure and giving the application the ability to scale to meet user demand, the best solution would be to deploy the application servers using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones and use an Amazon RDS DB instance in a Multi-AZ configuration.
By using an Amazon RDS DB instance in a Multi-AZ configuration, the database is automatically replicated across multiple Availability Zones, ensuring that the database is highly available and can withstand the failure of a single Availability Zone. This provides fault tolerance and avoids any single points of failure.
upvoted 2 times
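The database half of option A can be summarized in the parameters a boto3 rds.create_db_instance call would take, with Multi-AZ enabled. A minimal sketch, assuming hypothetical identifiers:

```python
# Hedged sketch of the option-A database layer: an RDS instance with a
# synchronous Multi-AZ standby. All identifiers below are hypothetical.

db_params = {
    "DBInstanceIdentifier": "app-db",
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MultiAZ": True,   # standby replica in a second AZ; automatic failover
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,  # credentials held in Secrets Manager
}

print(db_params["MultiAZ"])  # True
```

Unlike option C's read replica, the Multi-AZ standby uses synchronous replication and fails over automatically, so a power-outage-style crash no longer risks data loss.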
Buruguduystunstugudunstuy 3 months ago
Answer D, deploying the primary and secondary database servers on EC2 instances across multiple Availability Zones and using Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage between the instances, may provide high availability for the database but may introduce additional complexity, and management overhead, and potential performance issues.
upvoted 1 times
WherecanIstart 3 months, 1 week ago
Highly available = Multi-AZ approach
upvoted 2 times
dcp 3 months, 2 weeks ago
Option A is the correct solution. Deploying the application servers in an Auto Scaling group across multiple Availability Zones (AZs) ensures high availability and fault tolerance. An Auto Scaling group allows the application to scale horizontally to meet user demand. Using Amazon RDS DB instance in a Multi-AZ configuration ensures that the database is automatically replicated to a standby instance in a different AZ. This provides database redundancy and avoids any single point of failure.
upvoted 1 times
Question #402 Topic 1
A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2
instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not
receiving all the data that the application sends to Kinesis Data Streams. What should a solutions architect do to resolve this issue?
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Community vote distribution
A (51%) C (44%) 5%
cegama543 Highly Voted 3 months, 2 weeks ago
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
The best option is to update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams. Kinesis Data Streams scales horizontally by increasing or decreasing the number of shards, which controls the throughput capacity of the stream. By increasing the number of shards, the application will be able to send more data to Kinesis Data Streams, which can help ensure that S3 receives all the data.
upvoted 11 times
CapJackSparrow 3 months, 1 week ago
let's say you had infinite shards... if the retention period is 24 hours and you get the data every 48 hours, you will lose 24 hours of data no matter the number of shards, no?
upvoted 6 times
enzomv 3 months, 1 week ago
Amazon Kinesis Data Streams supports changes to the data record retention period of your data stream. A Kinesis data stream is an ordered sequence of data records meant to be written to and read from in real time. Data records are therefore stored in shards in your stream temporarily. The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis data stream stores records from 24 hours by default, up to 8760 hours (365 days).
upvoted 4 times
Buruguduystunstugudunstuy 3 months ago
Answer C:
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
Answer C updates the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams. By increasing the number of shards, the data is distributed across multiple shards, which allows for increased throughput and ensures that all data is ingested and processed by Kinesis Data Streams.
Monitoring the Kinesis Data Streams and adjusting the number of shards as needed to handle changes in data throughput can ensure that the application can handle large amounts of streaming data.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
@cegama543, my apologies. Moderator, can you remove the post above? I made a mistake; it was intended as a reply to another post that I submitted.
Thanks.
upvoted 1 times
WherecanIstart Highly Voted 3 months, 1 week ago
"A Kinesis data stream stores records from 24 hours by default, up to 8760 hours (365 days)." https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
The question mentioned Kinesis data stream default settings and "every other day". After 24hrs, the data isn't in the Data stream if the default settings is not modified to store data more than 24hrs.
upvoted 9 times
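The retention argument for answer A can be shown with a toy timeline: with the 24-hour default retention and a consumer that runs every 48 hours, any record older than the retention window at read time has already expired.

```python
# Toy model of why 24h default retention + a 48h read interval loses data.

RETENTION_H = 24   # Kinesis Data Streams default retention
READ_EVERY_H = 48  # "every other day"

def surviving_records(record_ages_h, retention_h=RETENTION_H):
    """Records still readable at consume time, given their age in hours."""
    return [age for age in record_ages_h if age <= retention_h]

# One record written per hour since the last read: ages 1..48 at read time.
ages = list(range(1, READ_EVERY_H + 1))
print(len(surviving_records(ages)))  # 24 of 48 records remain; the rest expired
```

No number of shards changes this: shards address throughput, not how long records are kept.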
jayce5 Most Recent 3 weeks, 4 days ago
C is wrong because even if you update the number of Kinesis shards, you still need to change the default data retention period first. Otherwise, you would lose data after 24 hours.
upvoted 1 times
A is unrelated to the issue. The correct answer is C.
upvoted 1 times
omoakin 1 month ago
Correct Ans. is B
upvoted 1 times
By default, a Kinesis data stream is created with one shard. If the data throughput to the stream is higher than the capacity of the single shard, the data stream may not be able to handle all the incoming data, and some data may be lost.
Therefore, to handle the high volume of data that the application sends to Kinesis Data Streams, the number of Kinesis shards should be increased to handle the required throughput
upvoted 2 times
both Option A and Option C could be valid solutions to resolving the issue of data loss, depending on the root cause of the problem. It would be best to analyze the root cause of the data loss issue to determine which solution is most appropriate for this specific scenario.
upvoted 1 times
CCCCCCCCC
upvoted 2 times
kraken21 2 months, 3 weeks ago
Also: https://www.examtopics.com/discussions/amazon/view/61067-exam-aws-certified-solutions-architect-associate-saa-c02/ for Option A.
upvoted 1 times
kraken21 2 months, 3 weeks ago
It comes down to is it a compute issue or a storage issue. Since the keywords of "Default", "every other day" were used and the issue is some data is missing, I am voting for Option A.
upvoted 5 times
ChatGPT gives answer B or C. It also mentions that Option A and Option D are not directly related to the issue of data loss and may not help to resolve the problem.
upvoted 3 times
Buruguduystunstugudunstuy 3 months ago
A comparison of Answer A and Answer C:
Answer A:
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
Answer A modifies the data retention period of Kinesis Data Streams, which defines how long the data is retained in the stream. Increasing the retention period may ensure that all data is eventually ingested and processed by Kinesis Data Streams, but it does not address the immediate issue of data not being ingested by Kinesis Data Streams.
Modifying the data retention period may also lead to increased storage costs if the data is retained for a longer period of time.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
Answer C:
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
Answer C updates the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams. By increasing the number of shards, the data is distributed across multiple shards, which allows for increased throughput and ensures that all data is ingested and processed by Kinesis Data Streams.
Monitoring the Kinesis Data Streams and adjusting the number of shards as needed to handle changes in data throughput can ensure that the application can handle large amounts of streaming data.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
In comparison, while both options can help address the issue of data not being ingested by Kinesis Data Streams, Answer C is a more direct solution that addresses the underlying issue of insufficient capacity to handle the data throughput. Answer A may delay the issue of
incomplete data ingestion by increasing the retention period, but it does not address the root cause of the problem.
In conclusion, Answer C is a more effective solution for handling large amounts of streaming data and ensuring that all data is ingested and processed by Kinesis Data Streams.
upvoted 2 times
ruqui 4 weeks, 1 day ago
your analysis is wrong ... the real problem is that the application consumes the data every 48 hours. If Kinesis holds only the latest 24 hours, then all data ingested by Kinesis in hours 0 to 23 is not present (it will only have the data from hours 24 to 48). For this reason, C is completely wrong; there's no way that having an infinite number of shards lets you process data that is already gone
upvoted 1 times
Grace83 3 months, 1 week ago
A is the correct answer
upvoted 1 times
nileshlg 3 months, 1 week ago
Correct answer is A. Keywords to consider are,
Default Parameters
Every Other Day
upvoted 5 times
dcp 3 months, 2 weeks ago
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
The issue is that the Amazon S3 bucket is not receiving all the data sent to Kinesis Data Streams. This indicates that the bottleneck is most likely in the Kinesis Data Streams configuration.
To resolve this issue, a solutions architect should increase the number of Kinesis shards. Kinesis Data Streams partitions data into shards, and each shard can handle a specific amount of data throughput. By default, Kinesis Data Streams is configured with a single shard, which may not be enough to handle the application's data throughput.
Increasing the number of shards will distribute the data more evenly and improve the throughput, allowing all the data to be processed and sent to Amazon S3 for further analysis.
upvoted 4 times
kampatra 3 months, 2 weeks ago
Need to increase default retention period
upvoted 2 times
UnluckyDucky 3 months, 2 weeks ago
By default, Kinesis Data Streams hold your data for 24 hours. Everything that is 24 hours and 1 second old gets deleted unless the retention policy is changed.
Key words: every other day, and the default settings for the Kinesis stream.
upvoted 4 times
Question #403 Topic 1
A developer has an application that uses an AWS Lambda function to upload files to Amazon S3 and needs the required permissions to perform the task. The developer already has an IAM user with valid IAM credentials required for Amazon S3.
What should a solutions architect do to grant the permissions?
A. Add required IAM permissions in the resource policy of the Lambda function.
B. Create a signed request using the existing IAM credentials in the Lambda function.
C. Create a new IAM user and use the existing IAM credentials in the Lambda function.
D. Create an IAM execution role with the required permissions and attach the IAM role to the Lambda function.
Community vote distribution
D (100%)
Buruguduystunstugudunstuy 3 months ago
To grant the necessary permissions to an AWS Lambda function to upload files to Amazon S3, a solutions architect should create an IAM execution role with the required permissions and attach the IAM role to the Lambda function. This approach follows the principle of least privilege and ensures that the Lambda function can only access the resources it needs to perform its specific task.
Therefore, the correct answer is D. Create an IAM execution role with the required permissions and attach the IAM role to the Lambda function.
upvoted 1 times
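Option D's execution role has two parts: a trust policy that lets the Lambda service assume the role, and a permissions policy scoped to the target bucket. A minimal sketch of both documents as Python dicts (the bucket name is hypothetical):

```python
import json

# Trust policy: lets the Lambda service assume this execution role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: least privilege -- only PutObject on one bucket.
# "my-upload-bucket" is a hypothetical name.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-upload-bucket/*",
    }],
}

role_document = json.dumps(trust_policy)
```

Note that the developer's existing IAM user credentials play no part here: the function gets temporary credentials by assuming the role.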
Bilalglg93350 3 months, 1 week ago
D. Create an IAM execution role with the required permissions and attach the IAM role to the Lambda function.
The solutions architect should create an IAM execution role that has the permissions needed to access Amazon S3 and perform the required operations (for example, uploading files). The role should then be attached to the Lambda function, so that the function can assume the role and have the permissions it needs to interact with Amazon S3.
upvoted 2 times
nileshlg 3 months, 1 week ago
kampatra 3 months, 2 weeks ago
sitha 3 months, 2 weeks ago
Create Lambda execution role and attach existing S3 IAM role to the lambda function
upvoted 1 times
ktulu2602 3 months, 2 weeks ago
Nithin1119 3 months, 2 weeks ago
taehyeki 3 months, 2 weeks ago
Question #404 Topic 1
A company has deployed a serverless application that invokes an AWS Lambda function when new documents are uploaded to an Amazon S3 bucket. The application uses the Lambda function to process the documents. After a recent marketing campaign, the company noticed that the application did not process many of the documents.
What should a solutions architect do to improve the architecture of this application?
A. Set the Lambda function's runtime timeout value to 15 minutes.
B. Configure an S3 bucket replication policy. Stage the documents in the S3 bucket for later processing.
C. Deploy an additional Lambda function. Load balance the processing of the documents across the two Lambda functions.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the queue. Configure the queue as an event source for Lambda.
Community vote distribution
D (100%)
TariqKipkemei 1 month ago
D is the best approach
upvoted 1 times
Russs99 3 months ago
D is the correct answer
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
To improve the architecture of this application, the best solution would be to use Amazon Simple Queue Service (Amazon SQS) to buffer the requests and decouple the S3 bucket from the Lambda function. This will ensure that the documents are not lost and can be processed at a later time if the Lambda function is not available. By using Amazon SQS, the architecture is decoupled and the Lambda function can process the documents in a scalable and fault-tolerant manner.
upvoted 1 times
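The buffering behaviour the answer describes can be simulated without AWS: producers enqueue faster than the consumer drains, yet nothing is lost. A sketch using Python's `queue` module as a stand-in for SQS:

```python
from queue import Queue

# Stand-in for SQS: a FIFO buffer between S3 upload events and the processor.
event_queue = Queue()

def upload_event(doc_name):
    """Producer side: each S3 upload enqueues a message instead of
    invoking the processor directly."""
    event_queue.put(doc_name)

def drain(batch_size=10):
    """Consumer side: the Lambda function polls the queue in batches
    at its own pace."""
    batch = []
    while not event_queue.empty() and len(batch) < batch_size:
        batch.append(event_queue.get())
    return batch

# Burst of 25 uploads (the marketing campaign); the consumer drains
# everything in batches of at most 10, dropping nothing.
for i in range(25):
    upload_event(f"doc-{i}")
batches = []
while not event_queue.empty():
    batches.append(drain())
```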
Bilalglg93350 3 months, 1 week ago
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the queue. Configure the queue as an event source for Lambda.
This solution handles load spikes effectively and avoids losing documents when traffic surges. When new documents are uploaded to the Amazon S3 bucket, the requests are sent to the Amazon SQS queue, which acts as a buffer. The Lambda function is triggered by events in the queue, which evens out the processing and prevents the application from being overwhelmed by a large number of simultaneous documents.
upvoted 1 times
Russs99 3 months ago
Exactly. If only I could explain it like that in French too.
upvoted 1 times
WherecanIstart 3 months, 1 week ago
D is the correct answer.
upvoted 1 times
kampatra 3 months, 2 weeks ago
dcp 3 months, 2 weeks ago
D is correct
upvoted 1 times
taehyeki 3 months, 2 weeks ago
Question #405 Topic 1
A solutions architect is designing the architecture for a software demonstration environment. The environment will run on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The system will experience significant increases in traffic during working
hours but is not required to operate on weekends.
Which combination of actions should the solutions architect take to ensure that the system can scale to meet demand? (Choose two.)
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway.
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
Community vote distribution
DE (54%) AD (22%) AE (20%) 5%
channn Highly Voted 2 months, 3 weeks ago
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate: This will allow the system to scale up or down based on incoming traffic demand. The solutions architect should use AWS Auto Scaling to monitor the request rate and adjust the ALB capacity as needed.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization: This will allow the system to scale up or down based on the CPU utilization of the EC2 instances in the Auto Scaling group. The solutions architect should use a target tracking scaling policy to maintain a specific CPU utilization target and adjust the number of EC2 instances in the Auto Scaling group accordingly.
upvoted 5 times
XaviL Most Recent 1 week ago
Hi guys, very simple
A, because the question is asking about request rate!!!! This is a requirement!
E. Nothing needs to run on the weekend!
A & D is not possible: how can you scale capacity based on both CPU and request rate? You need to pick one or the other (and that goes for all the questions here, guys!)
upvoted 2 times
RainWhisper 1 week, 2 days ago
ALBRequestCountPerTarget—Average Application Load Balancer request count per target. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#target-tracking-choose-metrics
It is possible to set to zero. "is not required to operate on weekends" means the instances are not required during the weekends. https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html
upvoted 1 times
Uzi_m 2 weeks, 6 days ago
Option E is incorrect because the question specifically mentions an increase in traffic during working hours. Therefore, it is not advisable to schedule the instances for 24 hours using default settings throughout the entire week.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
upvoted 1 times
omoakin 1 month ago
AD are the correct answers
upvoted 3 times
TariqKipkemei 1 month ago
Any one, two, or all of these options will meet the need:
Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
upvoted 2 times
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#target-tracking-choose-metrics
Based on the docs, the ASG can't track the ALB's request rate, so the answer is D & E; meanwhile, the ASG can track CPU metrics.
upvoted 4 times
The link shows:
ALBRequestCountPerTarget—Average Application Load Balancer request count per target.
upvoted 2 times
kraken21 2 months, 3 weeks ago
Scaling should be at the ASG not ALB. So, not sure about "Use AWS Auto Scaling to adjust the ALB capacity based on request rate"
upvoted 4 times
A. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. This approach allows the Auto Scaling group to automatically adjust the number of instances based on the specified metric, ensuring that the system can scale to meet demand during working hours.
D. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week. This approach allows the Auto Scaling group to reduce the number of instances to zero during weekends when traffic is expected to be low. It will help the organization to save costs by not paying for instances that are not needed during weekends.
Therefore, options A and D are the correct answers. Options B and C are not relevant to the scenario, and option E is not a scalable solution as it would require manual intervention to adjust the group capacity every week.
upvoted 1 times
This is why I don't believe A ("use Auto Scaling to adjust the ALB") is correct. D & E.
upvoted 3 times
AD
There is no requirement for cost minimization in the scenario; therefore, A & D are the answers.
upvoted 3 times
Buruguduystunstugudunstuy 3 months ago
A comparison of Answers D and E VERSUS another possible answer Answers A and E:
Answers D and E:
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
Answer D scales the Auto Scaling group based on instance CPU utilization, which ensures that the number of instances in the group can be adjusted to handle the increase in traffic during working hours and reduce capacity during periods of low traffic.
Answer E uses scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends, which ensures that the Auto Scaling group scales down to zero during weekends to save costs.
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
Answers A and E:
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
Answer A adjusts the capacity of the ALB based on request rate, which ensures that the ALB can handle the increase in traffic during working hours and reduce capacity during periods of low traffic.
Answer E uses scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends, which ensures that the Auto Scaling group scales down to zero during weekends to save costs.
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
Comparing the two options, both Answers D and A are valid choices for scaling the application based on demand. However, Answer D scales the Auto Scaling group based on instance CPU utilization, which is a more granular metric than request rate and can provide better performance and cost optimization. Answer A only scales the ALB based on the request rate, which may not be sufficient for handling
sudden spikes in traffic.
Answer E is a common choice for scaling down to zero during weekends to save costs. Both Answers D and A can be used in conjunction with Answer E to ensure that the Auto Scaling group scales down to zero during weekends. However, Answer D provides more granular control over the scaling of the Auto Scaling group based on instance CPU utilization, which can result in better performance and cost optimization.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
In conclusion, answers D and E provide a more granular and flexible solution for scaling the application based on demand and scaling down to zero during weekends, while Answers A and E may not be as granular and may not provide as much performance and cost optimization.
upvoted 3 times
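The scheduled-scaling half of the answer (E) is essentially a calendar rule: zero capacity on weekends, normal capacity otherwise. A minimal sketch of that intent, with hypothetical weekday capacity defaults:

```python
from datetime import date

# Hypothetical weekday defaults for the Auto Scaling group.
WEEKDAY_CAPACITY = {"min": 2, "max": 10, "desired": 2}
WEEKEND_CAPACITY = {"min": 0, "max": 0, "desired": 0}

def scheduled_capacity(day: date):
    """Answer E: zero out the group on weekends, defaults otherwise.
    date.weekday() returns Monday=0 .. Sunday=6."""
    return WEEKEND_CAPACITY if day.weekday() >= 5 else WEEKDAY_CAPACITY

saturday = date(2023, 7, 1)   # a Saturday
monday = date(2023, 7, 3)     # the following Monday
```

In AWS this rule would be expressed as two scheduled actions on the Auto Scaling group (one cron for Friday evening, one for Monday morning) rather than evaluated per day, but the effect on capacity is the same.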
thaotnt 3 months ago
A: "The system will experience significant increases in traffic during working hours."
E: "but is not required to operate on weekends."
upvoted 1 times
Kunj7 3 months, 1 week ago
Even though the question doesn't say anything about CPU utilisation, it does mention there will be an "increase in traffic during working hours", which means the CPU utilisation of the instance will go up. Hence I think D & E is still correct.
upvoted 4 times
isoman 3 months, 1 week ago
Auto scaling group can't adjust ALB capacity.
upvoted 1 times
klayytech 2 months, 4 weeks ago
AWS Auto Scaling can adjust the Application Load Balancer (ALB) capacity based on request rate. You can use target tracking scaling policies to scale your ALB automatically based on a target value for a specific metric. For example, you can create a target tracking scaling policy that maintains an average request count per target of 1000 requests per minute. When you use target tracking scaling policies with Application Load Balancers, you can specify a target value for a request metric such as RequestCountPerTarget.
upvoted 1 times
UnluckyDucky 3 months, 2 weeks ago
Weird question; there's no mention of high CPU utilization, so option D seems irrelevant.
Option A - deals with increased traffic by scaling according to request rate. Option E - for obvious reasons: shut down on the weekend, revert when the week starts.
upvoted 2 times
test_devops_aws 3 months, 1 week ago
I agree. AE is the correct answer.
upvoted 1 times
dcp 3 months, 2 weeks ago
Question #406 Topic 1
A solutions architect is designing a two-tiered architecture that includes a public subnet and a database subnet. The web servers in the public
subnet must be open to the internet on port 443. The Amazon RDS for MySQL DB instance in the database subnet must be accessible only to the web servers on port 3306.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on port 3306.
B. Create a security group for the DB instance. Add a rule to allow traffic from the public subnet CIDR block on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers’ security group on port 3306.
E. Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the web servers’ security group on port 3306.
Community vote distribution
CD (100%)
datmd77 1 month, 3 weeks ago
Remember guys that SG is not used for Deny action, just Allow
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
To meet the requirements of allowing access to the web servers in the public subnet on port 443 and the Amazon RDS for MySQL DB instance in the database subnet on port 3306, the best solution would be to create a security group for the web servers and another security group for the DB instance, and then define the appropriate inbound and outbound rules for each security group.
Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
Create a security group for the DB instance. Add a rule to allow traffic from the web servers' security group on port 3306.
This will allow the web servers in the public subnet to receive traffic from the internet on port 443, and the Amazon RDS for MySQL DB instance in the database subnet to receive traffic only from the web servers on port 3306.
upvoted 1 times
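The two facts behind answers C and D — security groups are allow-only, and a rule can reference another security group instead of a CIDR — can be modeled with a tiny evaluator. This is a deliberate simplification, not the real EC2 semantics (real security groups are stateful and also match CIDR blocks):

```python
# Tiny allow-only security group model. A rule is (source_sg_id, port);
# anything not explicitly allowed is implicitly denied -- there are no
# deny rules, which is why answer E is impossible to express.

web_sg = "sg-web"                 # hypothetical security group IDs
db_sg_rules = {(web_sg, 3306)}    # Answer D: allow the web SG on 3306

def is_allowed(rules, source_sg, port):
    """Implicit deny: traffic passes only if a matching allow rule exists."""
    return (source_sg, port) in rules

# Web servers reach MySQL; any other source, or any other port, is denied.
```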
kampatra 3 months, 2 weeks ago
Eden 3 months, 2 weeks ago
I choose CE
upvoted 1 times
lili_9 3 months, 2 weeks ago
CE support @sitha
upvoted 1 times
sitha 3 months, 2 weeks ago
Answer: CE. The solution is to deny access to the DB from the internet and allow access only from the web servers.
upvoted 1 times
KAUS2 3 months, 2 weeks ago
C & D are the right choices.
upvoted 1 times
KS2020 3 months, 2 weeks ago
why not CE?
upvoted 2 times
kampatra 3 months, 2 weeks ago
By default a security group denies all traffic, and we need to configure rules to allow it.
upvoted 1 times
dcp 3 months, 2 weeks ago
Characteristics of security group rules
You can specify allow rules, but not deny rules. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
upvoted 1 times
taehyeki 3 months, 2 weeks ago
Question #407 Topic 1
A company is implementing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use Lustre clients to access data. The solution must be fully managed.
Which solution meets these requirements?
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the file share.
C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach the file system to the origin server. Connect the application server to the file system.
D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
Community vote distribution
D (100%)
TariqKipkemei 1 month ago
Lustre clients = Amazon FSx for Lustre file system
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
To meet the requirements of a shared storage solution for a gaming application that can be accessed using Lustre clients and is fully managed, the best solution would be to use Amazon FSx for Lustre.
Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high-performance computing, machine learning, and gaming. It provides a POSIX-compliant file system that can be accessed using Lustre clients and offers high performance, scalability, and data durability.
This solution provides a highly available, scalable, and fully managed shared storage solution that can be accessed using Lustre clients. Amazon FSx for Lustre is optimized for compute-intensive workloads and provides high performance and durability.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
Answer A, creating an AWS DataSync task that shares the data as a mountable file system and mounting the file system to the application server, may not provide the required performance and scalability for a gaming application.
Answer B, creating an AWS Storage Gateway file gateway and connecting the application server to the file share, may not provide the required performance and scalability for a gaming application.
Answer C, creating an Amazon Elastic File System (Amazon EFS) file system and configuring it to support Lustre, may not provide the required performance and scalability for a gaming application and may require additional configuration and management overhead.
upvoted 1 times
kampatra 3 months, 2 weeks ago
kprakashbehera 3 months, 2 weeks ago
FSx for Lustre DDDDDD
upvoted 1 times
KAUS2 3 months, 2 weeks ago
Amazon FSx for Lustre is the right answer.
Lustre is a type of parallel distributed file system for large-scale computing: Machine Learning, High Performance Computing (HPC), video processing, financial modeling, Electronic Design Automation.
upvoted 1 times
cegama543 3 months, 2 weeks ago
Option D is the best solution because Amazon FSx for Lustre is a fully managed, high-performance file system that is designed to support compute-intensive workloads, such as those required by gaming applications. FSx for Lustre provides sub-millisecond access to petabyte-scale file
systems, and supports Lustre clients natively. This means that the gaming application can access the shared data directly from the FSx for Lustre file system without the need for additional configuration or setup.
Additionally, FSx for Lustre is a fully managed service, meaning that AWS takes care of all maintenance, updates, and patches for the file system, which reduces the operational overhead required by the company.
upvoted 1 times
taehyeki 3 months, 2 weeks ago
Question #408 Topic 1
A company runs an application that receives data from thousands of geographically dispersed remote devices that use UDP. The application processes the data immediately and sends a message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to another AWS Region.
Which solution will meet these requirements?
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
Community vote distribution
B (100%)
UnluckyDucky Highly Voted 3 months, 2 weeks ago
Key words: geographically dispersed, UDP.
Geographically dispersed (combined with UDP) → Global Accelerator: multiple entry points worldwide into the AWS network for better transfer rates. UDP → NLB (Network Load Balancer).
upvoted 6 times
TariqKipkemei Most Recent 1 month ago
UDP = AWS Global Accelerator and Network Load Balancer
upvoted 1 times
kraken21 2 months, 3 weeks ago
Global accelerator for multi region automatic failover. NLB for UDP.
upvoted 1 times
MaxMa 2 months, 3 weeks ago
why not A?
upvoted 1 times
kraken21 2 months, 3 weeks ago
NLBs do not support lambda target type. Tricky!!! https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html
upvoted 6 times
Buruguduystunstugudunstuy 3 months ago
To meet the requirements of minimizing latency for data transmission from the devices and providing rapid failover to another AWS Region, the best solution would be to use AWS Global Accelerator in combination with a Network Load Balancer (NLB) and Amazon Elastic Container Service (Amazon ECS).
AWS Global Accelerator is a service that improves the availability and performance of applications by using static IP addresses (Anycast) to route traffic to optimal AWS endpoints. With Global Accelerator, you can direct traffic to multiple Regions and endpoints, and provide automatic failover to another AWS Region.
upvoted 2 times
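The device pattern the question describes (send a UDP datagram, process it immediately, reply if necessary, store nothing) is the classic connectionless request/response exchange an NLB can front. A minimal loopback sketch of that exchange:

```python
import socket

# Minimal UDP request/response on the loopback interface -- the same
# pattern the remote devices use against the NLB-fronted application.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))             # OS picks a free port
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"telemetry", server_addr)  # device -> application

data, device_addr = server.recvfrom(1024) # process immediately, store nothing
server.sendto(b"ack:" + data, device_addr)  # message back to the device

reply, _ = client.recvfrom(1024)
client.close()
server.close()
```

Because UDP has no connection state, an ALB (HTTP/HTTPS only) cannot carry this traffic; an NLB can, which is why B beats C.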
Ruhi02 3 months, 2 weeks ago
Answer should be B. There is a typo in B. The correct answer is: Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon ECS.
upvoted 3 times
taehyeki 3 months, 2 weeks ago
Question #409 Topic 1
A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to
Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?
A. Migrate the file share to Amazon RDS.
B. Migrate the file share to AWS Storage Gateway.
C. Migrate the file share to Amazon FSx for Windows File Server.
D. Migrate the file share to Amazon Elastic File System (Amazon EFS).
Community vote distribution
C (93%) 7%
TariqKipkemei 1 month ago
Windows client = Amazon FSx for Windows File Server
upvoted 1 times
channn 2 months, 3 weeks ago
RDS is a database service
Storage Gateway is a hybrid cloud storage service that connects on-premises applications to AWS storage services.
D) provides shared file storage for Linux-based workloads, but it does not natively support Windows-based workloads.
upvoted 4 times
Buruguduystunstugudunstuy 3 months ago
The most resilient and durable replacement for the on-premises file share in this scenario would be Amazon FSx for Windows File Server.
Amazon FSx is a fully managed Windows file system service that is built on Windows Server and provides native support for the SMB protocol. It is designed to be highly available and durable, with built-in backup and restore capabilities. It is also fully integrated with AWS security services, providing encryption at rest and in transit, and it can be configured to meet compliance standards.
upvoted 3 times
Buruguduystunstugudunstuy 3 months ago
Migrating the file share to Amazon RDS or AWS Storage Gateway is not appropriate as these services are designed for database workloads and block storage respectively, and do not provide native support for the SMB protocol.
Migrating the file share to Amazon EFS (Linux ONLY) could be an option, but Amazon FSx for Windows File Server would be more appropriate in this case because it is specifically designed for Windows file shares and provides better performance for Windows applications.
upvoted 3 times
Grace83 3 months, 1 week ago
Obviously C is the correct answer - FSx for Windows - Windows
upvoted 4 times
UnluckyDucky 3 months, 2 weeks ago
FSx for Windows - Windows. EFS - Linux.
upvoted 2 times
elearningtakai 3 months, 2 weeks ago
Amazon EFS is a scalable and fully-managed file storage service that is designed to provide high availability and durability. It can be accessed by multiple EC2 instances across multiple Availability Zones simultaneously. Additionally, it offers automatic and instantaneous data replication across different availability zones within a region, which makes it resilient to failures.
upvoted 1 times
asoli 3 months, 1 week ago
EFS is a wrong choice because it can only work with Linux instances. That application has a Windows web server , so its OS is Windows and EFS cannot connect to it
upvoted 2 times
dcp 3 months, 2 weeks ago
sitha 3 months, 2 weeks ago
Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud.
Answer : C
upvoted 1 times
KAUS2 3 months, 2 weeks ago
FSx for Windows is a fully managed Windows file system share drive . Hence C is the correct answer.
upvoted 1 times
Ruhi02 3 months, 2 weeks ago
FSx for Windows is ideal in this case. So answer is C.
upvoted 1 times
taehyeki 3 months, 2 weeks ago
Question #410 Topic 1
A company is deploying a new application on Amazon EC2 instances. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
Community vote distribution
B (100%)
Buruguduystunstugudunstuy Highly Voted 3 months ago
The solution that will meet the requirement of ensuring that all data that is written to the EBS volumes is encrypted at rest is B. Create the EBS volumes as encrypted volumes and attach the encrypted EBS volumes to the EC2 instances.
When you create an EBS volume, you can specify whether to encrypt the volume. If you choose to encrypt the volume, all data written to the volume is automatically encrypted at rest using AWS-managed keys. You can also use customer-managed keys (CMKs) stored in AWS KMS to encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach them to EC2 instances to ensure that all data written to the volumes is encrypted at rest.
Answer A is incorrect because attaching an IAM role to the EC2 instances does not automatically encrypt the EBS volumes. Answer C is incorrect because adding an EC2 instance tag does not ensure that the EBS volumes are encrypted.
upvoted 5 times
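In practice, answer B is a launch-time setting. A sketch of the block-device mappings an EC2 launch request would carry (device names and sizes are hypothetical), plus a small check that every volume requests encryption:

```python
# Block-device mappings for an EC2 launch request (hypothetical values).
# Answer B: the volumes themselves are created as encrypted volumes.
block_device_mappings = [
    {"DeviceName": "/dev/xvda",
     "Ebs": {"VolumeSize": 30, "VolumeType": "gp3", "Encrypted": True}},
    {"DeviceName": "/dev/xvdb",
     "Ebs": {"VolumeSize": 100, "VolumeType": "gp3", "Encrypted": True}},
]

def all_encrypted(mappings):
    """True only if every EBS volume in the request asks for encryption
    (volumes default to unencrypted unless account-level default
    encryption is turned on)."""
    return all(m["Ebs"].get("Encrypted", False) for m in mappings)
```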
elearningtakai 2 months, 4 weeks ago
The other options either do not meet the requirement of encrypting data at rest (A and C) or do so in a more complex or less efficient manner (D).
upvoted 1 times
Bofi 3 months, 1 week ago
Why not D? EBS encryption requires the use of a KMS key.
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
Answer D is incorrect because creating a KMS key policy that enforces EBS encryption does not automatically encrypt EBS volumes. You need to create encrypted EBS volumes and attach them to EC2 instances to ensure that all data written to the volumes are encrypted at rest.
upvoted 2 times
WherecanIstart 3 months, 1 week ago
Create encrypted EBS volumes and attach encrypted EBS volumes to EC2 instances..
upvoted 2 times
sitha 3 months, 2 weeks ago
Use Amazon EBS encryption as the encryption solution for the EBS resources associated with your EC2 instances. Select either the default or a custom KMS key.
upvoted 1 times
Ruhi02 3 months, 2 weeks ago
Answer B. You can enable encryption for EBS volumes while creating them.
upvoted 1 times
taehyeki 3 months, 2 weeks ago
Question #411 Topic 1
A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.
Which solution will meet these requirements?
A. Amazon DynamoDB
B. Amazon RDS for MySQL
C. MySQL-compatible Amazon Aurora Serverless
D. MySQL deployed on Amazon EC2 in an Auto Scaling group
Community vote distribution
C (94%) 6%
channn 2 months, 3 weeks ago
C: Aurora Serverless is a MySQL-compatible relational database engine that automatically scales compute and memory resources based on application usage, with no upfront costs or commitments required.
A: DynamoDB is NoSQL. B: RDS has a fixed instance-class cost. D: requires more operational effort.
upvoted 4 times
Buruguduystunstugudunstuy 3 months ago
Answer C, MySQL-compatible Amazon Aurora Serverless, would be the best solution to meet the company's requirements.
Aurora Serverless can be a cost-effective option for databases with sporadic or unpredictable usage patterns since it automatically scales up or down based on the current workload. Additionally, Aurora Serverless is compatible with MySQL, so it does not require any modifications to the application's database code.
upvoted 3 times
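As a sketch of what the recommended option looks like in practice, the request below builds parameters for an Aurora Serverless v2 (MySQL-compatible) cluster, assuming the boto3 `rds.create_db_cluster` parameter shape; the cluster identifier and capacity bounds are placeholder assumptions, and only the parameter dict is built here:

```python
def aurora_serverless_cluster_params(cluster_id, min_acu=0.5, max_acu=16):
    """Parameters for an Aurora Serverless v2 MySQL-compatible cluster.

    Capacity scales automatically between min_acu and max_acu Aurora
    Capacity Units, which suits sporadic/unpredictable workloads; the
    MySQL-compatible engine means no application-side database changes.
    """
    return {
        "DBClusterIdentifier": cluster_id,      # placeholder name
        "Engine": "aurora-mysql",               # MySQL-compatible edition
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": min_acu,
            "MaxCapacity": max_acu,
        },
    }

params = aurora_serverless_cluster_params("app-db")
```

During the monthly traffic spike capacity scales toward the maximum; during quiet periods it scales back down, which is where the cost saving comes from.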
klayytech 3 months ago
Amazon RDS for MySQL is a cost-effective database platform that will not require database modifications. It makes it easier to set up, operate, and scale MySQL deployments in the cloud. With Amazon RDS, you can deploy scalable MySQL servers in minutes with cost-efficient and resizable hardware capacity.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB is a good choice for applications that require low-latency data access.
MySQL-compatible Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs.
So, Amazon RDS for MySQL is the best option for your requirements.
upvoted 1 times
klayytech 2 months, 4 weeks ago
Sorry, I will change to C, because:
Amazon RDS for MySQL is a fully managed relational database service that makes it easy to set up, operate, and scale MySQL deployments in the cloud. Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs. It is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
upvoted 2 times
boxu03 3 months, 2 weeks ago
Amazon Aurora Serverless : a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
upvoted 3 times
taehyeki 3 months, 2 weeks ago
Question #412 Topic 1
An image-hosting company stores its objects in Amazon S3 buckets. The company wants to avoid accidental exposure of the objects in the S3 buckets to the public. All S3 objects in the entire AWS account need to remain private.
Which solution will meet these requirements?
A. Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change that makes the objects public.
B. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is detected. Manually change the S3 bucket policy if it allows public access.
C. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change.
D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
Community vote distribution
D (90%) 10%
Ruhi02 Highly Voted 3 months, 2 weeks ago
The answer is D, ladies and gentlemen. While GuardDuty helps monitor S3 for potential threats, it's a reactive control. We should always be proactive rather than reactive in our solutions, so D: block public access to remove any possibility of the objects becoming publicly accessible.
upvoted 10 times
Yadav_Sanjay 1 month, 1 week ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
upvoted 2 times
elearningtakai 2 months, 4 weeks ago
This is the most effective solution to meet the requirements.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
Answer D is the correct solution that meets the requirements. The S3 Block Public Access feature allows you to restrict public access to S3 buckets and objects within the account. You can enable this feature at the account level to prevent any S3 bucket from being made public, regardless of the bucket policy settings. AWS Organizations can be used to apply a Service Control Policy (SCP) to the account to prevent IAM users from changing this setting, ensuring that all S3 objects remain private. This is a straightforward and effective solution that requires minimal operational overhead.
upvoted 2 times
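The two halves of option D can be sketched as configuration payloads. A minimal sketch, assuming the account-level `PutPublicAccessBlock` setting shape and standard SCP JSON (the SCP here denies the account-level action that would loosen the setting; treat the exact action list as an assumption to verify against your policy requirements):

```python
# Account-level S3 Block Public Access settings (all four controls on),
# as passed to s3control put_public_access_block:
public_access_block = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# SCP attached via AWS Organizations so IAM users cannot change the setting back.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyChangingAccountPublicAccessBlock",
        "Effect": "Deny",
        "Action": ["s3:PutAccountPublicAccessBlock"],
        "Resource": "*",
    }],
}
```

The account-level setting overrides individual bucket policies, and the SCP makes the setting itself immutable for principals in the account.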
Bofi 3 months, 1 week ago
Option D provides a real solution by blocking public access at the account level. The other options focus on detection, which wasn't what was asked.
upvoted 2 times
taehyeki 3 months, 2 weeks ago
Question #413 Topic 1
An ecommerce company is experiencing an increase in user traffic. The company’s store is deployed on Amazon EC2 instances as a two-tier web application consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing significant delays in sending timely marketing and order confirmation emails to users. The company wants to reduce the time it spends resolving complex email delivery issues and minimize operational overhead.
What should a solutions architect do to meet these requirements?
A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS).
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
Community vote distribution
B (100%)
elearningtakai 2 months, 4 weeks ago
Amazon SES is a cost-effective and scalable email service that enables businesses to send and receive email using their own email addresses and domains. Configuring the web instance to send email through Amazon SES is a simple and effective solution that can reduce the time spent resolving complex email delivery issues and minimize operational overhead.
upvoted 4 times
Buruguduystunstugudunstuy 3 months ago
The best option for addressing the company's needs of minimizing operational overhead and reducing time spent resolving email delivery issues is to use Amazon Simple Email Service (Amazon SES).
Answer A of creating a separate application tier for email processing may add additional complexity to the architecture and require more operational overhead.
Answer C of using Amazon Simple Notification Service (Amazon SNS) is not an appropriate solution for sending marketing and order confirmation emails since Amazon SNS is a messaging service that is designed to send messages to subscribed endpoints or clients.
Answer D of creating a separate application tier using EC2 instances dedicated to email processing placed in an Auto Scaling group is a more complex solution than necessary and may result in additional operational overhead.
upvoted 2 times
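To make the SES recommendation concrete, here is a minimal sketch of a `SendEmail` request body in the classic SES API shape (addresses, subject, and order ID are all placeholder assumptions; in a real account the sender address or domain must be verified in SES first, and only the parameter dict is built here):

```python
def order_confirmation_email(sender, recipient, order_id):
    """Build SendEmail parameters for an order confirmation message.

    SES handles deliverability, bounces, and complaints, which is what
    removes the 'complex email delivery issues' operational burden.
    """
    return {
        "Source": sender,  # must be a verified SES identity
        "Destination": {"ToAddresses": [recipient]},
        "Message": {
            "Subject": {"Data": f"Order {order_id} confirmed"},
            "Body": {"Text": {"Data": f"Thanks! Your order {order_id} is confirmed."}},
        },
    }

params = order_confirmation_email("noreply@example.com", "user@example.com", "1234")
```

Contrast with SNS, which fans messages out to subscribed endpoints; it is not designed for addressed, formatted transactional email like this.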
nileshlg 3 months, 2 weeks ago
Answer is B
upvoted 2 times
Ruhi02 3 months, 2 weeks ago
Answer B.. SES is meant for sending high volume e-mail efficiently and securely.
SNS is meant as a channel publisher/subscriber service
upvoted 4 times
taehyeki 3 months, 2 weeks ago
Question #414 Topic 1
A company has a business system that generates hundreds of reports each day. The business system saves the reports to a network share in CSV format. The company needs to store this data in the AWS Cloud in near-real time for analysis.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each day.
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.
C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.
D. Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share and uploads the new files by using SFTP.
Community vote distribution
B (88%) 13%
antropaws 4 weeks, 1 day ago
B. DataSync is better for one-time migrations.
upvoted 2 times
kruasan 2 months ago
The correct solution here is:
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway. This option requires the least administrative overhead because:
It presents a simple network file share interface that the business system can write to, just like a standard network share. This requires minimal changes to the business system.
The S3 File Gateway automatically uploads all files written to the share to an S3 bucket in the background. This handles the transfer and upload to S3 without requiring any scheduled tasks, scripts or automation.
All ongoing management like monitoring, scaling, patching etc. is handled by AWS for the S3 File Gateway.
upvoted 2 times
kruasan 2 months ago
The other options would require more ongoing administrative effort:
A) AWS DataSync would require creating and managing scheduled tasks and monitoring them.
Using the DataSync API would require developing an application and then managing and monitoring it.
The SFTP option would require creating scripts, managing SFTP access and keys, and monitoring the file transfer process.
So overall, the S3 File Gateway requires the least amount of ongoing management and administration as it presents a simple file share interface but handles the upload to S3 in a fully managed fashion. The business system can continue writing to a network share as is, while the files are transparently uploaded to S3.
The S3 File Gateway is the most hands-off, low-maintenance solution in this scenario.
upvoted 2 times
channn 2 months, 3 weeks ago
Key words:
near-real time (A is out)
LEAST administrative (C and D are out)
upvoted 3 times
elearningtakai 2 months, 4 weeks ago
A - creating a scheduled task is not near-real time.
B - The S3 File Gateway caches frequently accessed data locally and automatically uploads it to Amazon S3, providing near-real-time access to the data.
C - creating an application that uses the DataSync API in the automation workflow may provide near-real-time data access, but it requires additional development effort.
D - it requires additional development effort.
upvoted 3 times
zooba72 3 months ago
It's B. DataSync has a scheduler, and it runs at hourly intervals at most; it cannot be used in real time
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
The correct answer is C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.
To store the CSV reports generated by the business system in the AWS Cloud in near-real time for analysis, the best solution with the least administrative overhead would be to use AWS DataSync to transfer the files to Amazon S3 and create an application that uses the DataSync API in the automation workflow.
AWS DataSync is a fully managed service that makes it easy to automate and accelerate data transfer between on-premises storage systems and AWS Cloud storage, such as Amazon S3. With DataSync, you can quickly and securely transfer large amounts of data to the AWS Cloud, and you can automate the transfer process using the DataSync API.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
Answer A, using AWS DataSync to transfer the files to Amazon S3 and creating a scheduled task that runs at the end of each day, is not the best solution because it does not meet the requirement of storing the CSV reports in near-real time for analysis.
Answer B, creating an Amazon S3 File Gateway and updating the business system to use a new network share from the S3 File Gateway, is not the best solution because it requires additional configuration and management overhead.
Answer D, deploying an AWS Transfer for the SFTP endpoint and creating a script to check for new files on the network share and upload the new files using SFTP, is not the best solution because it requires additional scripting and management overhead
upvoted 1 times
COTIT 3 months, 1 week ago
I think B is the better answer, "LEAST administrative overhead" https://aws.amazon.com/storagegateway/file/?nc1=h_ls
upvoted 3 times
andyto 3 months, 1 week ago
B - S3 File Gateway.
C - this is the wrong answer because the data migration is scheduled (it is not a continuous task), so the "near-real time" condition is not fulfilled
upvoted 1 times
Thief 3 months, 1 week ago
C is the best answer
upvoted 1 times
lizzard812 3 months ago
Why not A? There is no scheduled job?
upvoted 1 times
Question #415 Topic 1
A company is storing petabytes of data in Amazon S3 Standard. The data is stored in multiple S3 buckets and is accessed with varying frequency. The company does not know access patterns for all the data. The company needs to implement a solution for each S3 bucket to optimize the cost of S3 usage.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
B. Use the S3 storage class analysis tool to determine the correct tier for each object in the S3 bucket. Move each object to the identified storage tier.
C. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Glacier Instant Retrieval.
D. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 One Zone-Infrequent Access (S3 One Zone-IA).
Community vote distribution
A (100%)
TariqKipkemei 4 weeks, 1 day ago
Unknown access patterns for the data = S3 Intelligent-Tiering
upvoted 1 times
channn 2 months, 3 weeks ago
Key words: 'The company does not know access patterns for all the data', so A.
upvoted 2 times
Buruguduystunstugudunstuy 3 months ago
The correct answer is A.
Creating an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering would be the most efficient solution to optimize the cost of S3 usage. S3 Intelligent-Tiering is a storage class that automatically moves objects between two access tiers (frequent and infrequent) based on changing access patterns. It is a cost-effective solution that does not require any manual intervention to move data to different storage classes, unlike the other options.
upvoted 2 times
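The lifecycle rule from option A can be sketched as the configuration document a `PutBucketLifecycleConfiguration` call would carry (rule ID is a placeholder; the empty filter making the rule apply bucket-wide is an assumption to adjust per bucket):

```python
# Lifecycle configuration moving all objects into S3 Intelligent-Tiering,
# which then shifts each object between access tiers automatically as
# its access pattern changes -- no per-object analysis needed.
lifecycle_config = {
    "Rules": [{
        "ID": "to-intelligent-tiering",   # placeholder rule name
        "Status": "Enabled",
        "Filter": {},                     # empty filter = whole bucket
        "Transitions": [{
            "Days": 0,                    # transition as soon as allowed
            "StorageClass": "INTELLIGENT_TIERING",
        }],
    }],
}
```

Applying one such rule per bucket is a one-time action, which is what makes this the most operationally efficient choice for petabytes of data with unknown access patterns.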
Buruguduystunstugudunstuy 3 months ago
Answer B, Using the S3 storage class analysis tool to determine the correct tier for each object and manually moving objects to the identified storage tier would be time-consuming and require more operational overhead.
Answer C, Transitioning objects to S3 Glacier Instant Retrieval would be appropriate for data that is accessed less frequently and does not require immediate access.
Answer D, S3 One Zone-IA would be appropriate for data that can be recreated if lost and does not require the durability of S3 Standard or S3 Standard-IA.
upvoted 1 times
COTIT 3 months, 1 week ago
For me is A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
Why?
"S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns" https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
upvoted 2 times
Bofi 3 months, 1 week ago
Once the data traffic is unpredictable, Intelligent-Tiering is the best option
upvoted 2 times
NIL8891 3 months, 1 week ago
Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
upvoted 1 times
Maximus007 3 months, 1 week ago
A: as exact pattern is not clear
upvoted 2 times
Question #416 Topic 1
A rapidly growing global ecommerce company is hosting its web application on AWS. The web application includes static content and dynamic content. The website stores online transaction processing (OLTP) data in an Amazon RDS database. The website’s users are experiencing slow page loads.
Which combination of actions should a solutions architect take to resolve this issue? (Choose two.)
A. Configure an Amazon Redshift cluster.
B. Set up an Amazon CloudFront distribution.
C. Host the dynamic web content in Amazon S3.
D. Create a read replica for the RDS DB instance.
E. Configure a Multi-AZ deployment for the RDS DB instance.
Community vote distribution
BD (82%) Other
Buruguduystunstugudunstuy Highly Voted 3 months ago
To resolve the issue of slow page loads for a rapidly growing e-commerce website hosted on AWS, a solutions architect can take the following two actions:
Set up an Amazon CloudFront distribution
Create a read replica for the RDS DB instance
Configuring an Amazon Redshift cluster is not relevant to this issue since Redshift is a data warehousing service and is typically used for the analytical processing of large amounts of data.
Hosting the dynamic web content in Amazon S3 may not necessarily improve performance since S3 is an object storage service, not a web application server. While S3 can be used to host static web content, it may not be suitable for hosting dynamic web content since S3 doesn't support server-side scripting or processing.
Configuring a Multi-AZ deployment for the RDS DB instance will improve high availability but may not necessarily improve performance.
upvoted 7 times
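The read-replica half of the B+D answer can be sketched as the request a `CreateDBInstanceReadReplica` call would carry (both identifiers are placeholders; only the parameter dict is built here):

```python
def read_replica_params(source_id, replica_id):
    """Parameters for creating an RDS read replica.

    The replica serves read traffic, offloading the primary instance;
    writes still go to the source. This addresses slow page loads in a
    way Multi-AZ (a high-availability standby) would not.
    """
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
    }

params = read_replica_params("store-db", "store-db-read-1")
```

CloudFront then covers the other half: caching static (and cacheable dynamic) content at the edge so fewer requests reach the web tier at all.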
TariqKipkemei 4 weeks, 1 day ago
Resolve latency = Amazon CloudFront distribution and read replica for the RDS DB
upvoted 3 times
SamDouk 3 months ago
B and D
upvoted 2 times
klayytech 3 months ago
The website’s users are experiencing slow page loads.
To resolve this issue, a solutions architect should take the following two actions:
Create a read replica for the RDS DB instance. This will help to offload read traffic from the primary database instance and improve performance.
upvoted 2 times
zooba72 3 months ago
Question asked about performance improvements, not HA. Cloudfront & Read Replica
upvoted 2 times
thaotnt 3 months ago
slow page loads. >>> D
upvoted 2 times
andyto 3 months, 1 week ago
Read Replica will speed up Reads on RDS DB.
E is wrong. It brings HA but doesn't contribute to speed which is impacted in this case. Multi-AZ is Active-Standby solution.
upvoted 1 times
COTIT 3 months, 1 week ago
I agree with B & E.
B. Set up an Amazon CloudFront distribution. (Amazon CloudFront is a content delivery network (CDN) service)
E. Configure a Multi-AZ deployment for the RDS DB instance. (Good idea for balancing the DB workload)
upvoted 2 times
Santosh43 3 months, 1 week ago
B and E (as there is nothing mentioned about read transactions)
upvoted 1 times
Akademik6 3 months, 1 week ago
Cloudfront and Read Replica. We don't need HA here.
upvoted 3 times
acts268 3 months, 1 week ago
Cloud Front and Read Replica
upvoted 4 times
Bofi 3 months, 1 week ago
Amazon CloudFront can handle both static and dynamic content, hence there is no need for option C, i.e. hosting the static data on Amazon S3. An RDS read replica will reduce the amount of reads on the RDS instance, leading to better performance. Multi-AZ is for disaster recovery, which means E is also out.
upvoted 1 times
NIL8891 3 months, 1 week ago
B and E
upvoted 2 times
Question #417 Topic 1
A company uses Amazon EC2 instances and AWS Lambda functions to run its application. The company has VPCs with public subnets and private subnets in its AWS account. The EC2 instances run in a private subnet in one of the VPCs. The Lambda functions need direct network access to the EC2 instances for the application to work.
The application will run for at least 1 year. The company expects the number of Lambda functions that the application uses to increase during that time. The company wants to maximize its savings on all application resources and to keep network latency between the services low.
Which solution will meet these requirements?
A. Purchase an EC2 Instance Savings Plan Optimize the Lambda functions’ duration and memory usage and the number of invocations. Connect the Lambda functions to the private subnet that contains the EC2 instances.
B. Purchase an EC2 Instance Savings Plan Optimize the Lambda functions' duration and memory usage, the number of invocations, and the amount of data that is transferred. Connect the Lambda functions to a public subnet in the same VPC where the EC2 instances run.
C. Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number of invocations, and the amount of data that is transferred. Connect the Lambda functions to the private subnet that contains the EC2 instances.
D. Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number of invocations, and the amount of data that is transferred. Keep the Lambda functions in the Lambda service VPC.
Community vote distribution
C (100%)
Buruguduystunstugudunstuy Highly Voted 3 months ago
Answer C is the best solution that meets the company’s requirements.
By purchasing a Compute Savings Plan, the company can save on the costs of running both EC2 instances and Lambda functions. The Lambda functions can be connected to the private subnet that contains the EC2 instances through a VPC endpoint for AWS services or a VPC peering connection. This provides direct network access to the EC2 instances while keeping the traffic within the private network, which helps to minimize network latency.
Optimizing the Lambda functions’ duration, memory usage, number of invocations, and amount of data transferred can help to further minimize costs and improve performance. Additionally, using a private subnet helps to ensure that the EC2 instances are not directly accessible from the public internet, which is a security best practice.
upvoted 6 times
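Connecting a Lambda function to the private subnet comes down to its VPC configuration. A minimal sketch of the `VpcConfig` fragment passed when creating or updating the function (subnet and security group IDs are placeholders; only the config dict is built here):

```python
def lambda_vpc_config(private_subnet_ids, security_group_ids):
    """VpcConfig fragment attaching a Lambda function to a VPC.

    Placing the function in the private subnets that hold the EC2
    instances gives it direct, low-latency network access to them,
    without exposing either side to the public internet.
    """
    return {
        "SubnetIds": list(private_subnet_ids),
        "SecurityGroupIds": list(security_group_ids),
    }

cfg = lambda_vpc_config(["subnet-aaa"], ["sg-bbb"])
```

The Compute Savings Plan then covers both the EC2 and Lambda spend, which an EC2 Instance Savings Plan would not.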
Buruguduystunstugudunstuy 3 months ago
Answer A is not the best solution because connecting the Lambda functions directly to the private subnet that contains the EC2 instances may not be scalable as the number of Lambda functions increases. Additionally, using an EC2 Instance Savings Plan may not provide savings on the costs of running Lambda functions.
Answer B is not the best solution because connecting the Lambda functions to a public subnet may not be as secure as connecting them to a private subnet. Also, keeping the EC2 instances in a private subnet helps to ensure that they are not directly accessible from the public internet.
Answer D is not the best solution because keeping the Lambda functions in the Lambda service VPC may not provide direct network access to the EC2 instances, which may impact the performance of the application.
upvoted 2 times
elearningtakai Most Recent 3 months ago
Connect Lambda to Private Subnet contains EC2
upvoted 1 times
zooba72 3 months ago
Compute savings plan covers both EC2 & Lambda
upvoted 2 times
Zox42 3 months, 1 week ago
C. I would go with C, because Compute savings plans cover Lambda as well.
upvoted 1 times
andyto 3 months, 1 week ago
A. I would go with A. Savings and low network latency are required. EC2 Instance Savings Plans offer savings of up to 72%; Compute Savings Plans offer savings of up to 66%.
Placing Lambda on the same private network as the EC2 instances provides the lowest latency.
upvoted 1 times
abitwrong 3 months, 1 week ago
EC2 Instance Savings Plans apply to EC2 usage only. Compute Savings Plans apply to usage across Amazon EC2, AWS Lambda, and AWS Fargate. (https://aws.amazon.com/savingsplans/faq/)
Lambda functions need direct network access to the EC2 instances for the application to work and these EC2 instances are in the private subnet. So the correct answer is C.
upvoted 1 times
Question #418 Topic 1
A solutions architect needs to allow team members to access Amazon S3 buckets in two different AWS accounts: a development account and a production account. The team currently has access to S3 buckets in the development account by using unique IAM users that are assigned to an IAM group that has appropriate permissions in the account.
The solutions architect has created an IAM role in the production account. The role has a policy that grants access to an S3 bucket in the production account.
Which solution will meet these requirements while complying with the principle of least privilege?
A. Attach the Administrator Access policy to the development account users.
B. Add the development account as a principal in the trust policy of the role in the production account.
C. Turn off the S3 Block Public Access feature on the S3 bucket in the production account.
D. Create a user in the production account with unique credentials for each team member.
Community vote distribution
B (100%)
kels1 Highly Voted 2 months, 1 week ago
well, if you made it this far, it means you are persistent :) Good luck with your exam!
upvoted 19 times
SkyZeroZx 1 month, 3 weeks ago
Thanks good luck for all
upvoted 4 times
gpt_test Most Recent 2 months, 3 weeks ago
By adding the development account as a principal in the trust policy of the IAM role in the production account, you are allowing users from the development account to assume the role in the production account. This allows the team members to access the S3 bucket in the production account without granting them unnecessary privileges.
upvoted 2 times
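The trust-policy mechanism described above can be sketched as the JSON document attached to the role in the production account (the account ID is a placeholder; in practice you would likely scope the principal to specific roles or add conditions rather than trust the whole account root):

```python
def cross_account_trust_policy(dev_account_id):
    """Trust policy for the IAM role in the production account.

    Naming the development account as principal lets its existing IAM
    users call sts:AssumeRole into production, so no duplicate users
    are needed there -- least privilege, least duplication.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{dev_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

policy = cross_account_trust_policy("111122223333")
```

The role's permissions policy (granting the S3 bucket access) stays separate from this trust policy, which only controls who may assume the role.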
elearningtakai 3 months ago
About Trust policy – The trust policy defines which principals can assume the role, and under which conditions. A trust policy is a specific type of resource-based policy for IAM roles.
Answer A: grants excessive admin permissions to the development account users.
Answer C: Block Public Access is a security best practice but is not relevant to this scenario.
Answer D: difficult to manage and scale.
upvoted 1 times
Buruguduystunstugudunstuy 3 months ago
Answer A, attaching the Administrator Access policy to development account users, provides too many permissions and violates the principle of least privilege. This would give users more access than they need, which could lead to security issues if their credentials are compromised.
Answer C, turning off the S3 Block Public Access feature, is not a recommended solution as it is a security best practice to enable S3 Block Public Access to prevent accidental public access to S3 buckets.
Answer D, creating a user in the production account with unique credentials for each team member, is also not a recommended solution as it can be difficult to manage and scale for large teams. It is also less secure, as individual user credentials can be more easily compromised.
upvoted 2 times
klayytech 3 months ago
The solution that will meet these requirements while complying with the principle of least privilege is to add the development account as a principal in the trust policy of the role in the production account. This will allow team members to access Amazon S3 buckets in two different AWS accounts while complying with the principle of least privilege.
Option A is not recommended because it grants too much access to development account users. Option C is not relevant to this scenario. Option D is not recommended because it does not comply with the principle of least privilege.
upvoted 1 times
Akademik6 3 months, 1 week ago
B is the correct answer
upvoted 2 times
Question #419 Topic 1
A company uses AWS Organizations with all features enabled and runs multiple Amazon EC2 workloads in the ap-southeast-2 Region. The company has a service control policy (SCP) that prevents any resources from being created in any other Region. A security policy requires the company to encrypt all data at rest.
An audit discovers that employees have created Amazon Elastic Block Store (Amazon EBS) volumes for EC2 instances without encrypting the volumes. The company wants any new EC2 instances that any IAM user or root user launches in ap-southeast-2 to use encrypted EBS volumes. The company wants a solution that will have minimal effect on employees who create EBS volumes.
Which combination of steps will meet these requirements? (Choose two.)
A. In the Amazon EC2 console, select the EBS encryption account attribute and define a default encryption key.
B. Create an IAM permission boundary. Attach the permission boundary to the root organizational unit (OU). Define the boundary to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
C. Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
D. Update the IAM policies for each account to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
E. In the Organizations management account, specify the Default EBS volume encryption setting.
Community vote distribution
CE (100%)
Buruguduystunstugudunstuy 2 weeks, 1 day ago
SCPs are a great way to enforce policies across an entire AWS Organization, preventing users from creating resources that do not comply with the set policies.
In AWS Management Console, one can go to EC2 dashboard -> Settings -> Data encryption -> Check "Always encrypt new EBS volumes" and choose a default KMS key. This ensures that every new EBS volume created will be encrypted by default, regardless of how it is created.
upvoted 1 times
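The SCP described in option C can be sketched directly, using the `ec2:Encrypted` condition key exactly as the option states (the statement Sid is a placeholder):

```python
# SCP attached to the root OU: deny creating any EBS volume whose
# Encrypted flag is false. Because SCPs apply to IAM users and the
# root user alike, this enforces encryption account-wide while leaving
# employees' normal encrypted-volume workflows untouched.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedVolumes",   # placeholder Sid
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}
```

Paired with a default EBS encryption setting, volumes launched without an explicit encryption flag are encrypted automatically, so the SCP rarely has to actually deny anything.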
PRASAD180 1 month ago
1000% CE correct
upvoted 1 times
RainWhisper 1 month, 1 week ago
Encryption by default allows you to ensure that all new EBS volumes created in your account are always encrypted, even if you don’t specify encrypted=true request parameter.
https://aws.amazon.com/blogs/compute/must-know-best-practices-for-amazon-ebs-encryption/
upvoted 1 times
hiroohiroo 1 month, 1 week ago
I think C and E are correct.
upvoted 1 times
Axaus 1 month, 1 week ago
CE
Prevent future issues by creating a SCP and set a default encryption.
upvoted 4 times
nosense 1 month, 2 weeks ago
SCP that denies the ec2:CreateVolume action when the ec2:Encrypted condition equals false. This will prevent users and service accounts in member accounts from creating unencrypted EBS volumes in the ap-southeast-2 Region.
upvoted 2 times
Efren 1 month, 1 week ago
agreed
upvoted 1 times
Question #420 Topic 1
A company wants to use an Amazon RDS for PostgreSQL DB cluster to simplify time-consuming database administrative tasks for production database workloads. The company wants to ensure that its database is highly available and will provide automatic failover support in most scenarios in less than 40 seconds. The company wants to offload reads off of the primary instance and keep costs as low as possible.
Which solution will meet these requirements?
A. Use an Amazon RDS Multi-AZ DB instance deployment. Create one read replica and point the read workload to the read replica.
B. Use an Amazon RDS Multi-AZ DB cluster deployment. Create two read replicas and point the read workload to the read replicas.
C. Use an Amazon RDS Multi-AZ DB instance deployment. Point the read workload to the secondary instances in the Multi-AZ pair.
D. Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader endpoint.
Community vote distribution
D (61%) A (39%)
Buruguduystunstugudunstuy 2 weeks, 1 day ago
The correct answer is:
D. Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader endpoint.
Explanation:
The company wants high availability, automatic failover support in less than 40 seconds, read offloading from the primary instance, and cost-effectiveness.
Answer D is the best choice for several reasons:
Amazon RDS Multi-AZ deployments provide high availability and automatic failover support.
In a Multi-AZ DB cluster, Amazon RDS automatically provisions and maintains a standby in a different Availability Zone. If a failure occurs, Amazon RDS performs an automatic failover to the standby, minimizing downtime.
The "Reader endpoint" for an Amazon RDS DB cluster provides load-balancing support for read-only connections to the DB cluster. Directing read traffic to the reader endpoint helps in offloading read operations from the primary instance.
upvoted 2 times
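The reader endpoint mentioned above can be sketched like this. A Multi-AZ DB cluster exposes a writer endpoint and a read-only reader endpoint that load-balances across the two readable standbys; both endpoint names below are hypothetical placeholders, and no database connection is made here.

```python
# Sketch: route read-only queries to a Multi-AZ DB cluster's reader endpoint.
# Endpoint names are hypothetical placeholders.
WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(query: str) -> str:
    """Send SELECTs to the reader endpoint, everything else to the writer."""
    if query.lstrip().upper().startswith("SELECT"):
        return READER_ENDPOINT
    return WRITER_ENDPOINT

print(endpoint_for("SELECT * FROM scores"))
```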
TariqKipkemei 3 weeks ago
This is a case where both options A and D can work, but option D gives 2 DB instances for reads compared to only 1 given by option A. Cost-wise they are the same, as both options use 3 DB instances.
upvoted 1 times
Henrytml 4 weeks, 1 day ago
lowest cost option, and effective with read replica
upvoted 3 times
antropaws 4 weeks, 1 day ago
It's D. Read well: "A company wants to use an Amazon RDS for PostgreSQL DB CLUSTER".
upvoted 1 times
RainWhisper 4 weeks, 1 day ago
A Multi-AZ DB cluster deployment is a semisynchronous, high availability deployment mode of Amazon RDS with two readable standby DB instances. A Multi-AZ DB cluster has a writer DB instance and two reader DB instances in three separate Availability Zones in the same AWS Region. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html
Amazon RDS Multi-AZ with two readable standbys. Automatically fail over in typically under 35 seconds https://aws.amazon.com/rds/features/multi-az/
upvoted 1 times
omoakin 1 month ago
D.
Use an Amazon RDS Multi-AZ DB cluster deployment Point the read workload to the reader endpoint.
upvoted 1 times
coldgin37 1 month ago
D - Instance deployment failover times are typically 60-120 seconds, so a clustered deployment is required for 40 seconds or less. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 2 times
elmogy 1 month ago
D for two reasons,
Failover times are typically 60-120 seconds in an RDS Multi-AZ DB instance deployment.
We can use the standby DBs for reads (possible in an RDS Multi-AZ DB cluster), and that will "keep the cost as low as possible".
upvoted 3 times
ogerber 1 month ago
A - multi-az instance : failover takes between 60-120 sec D - multi-az cluster: failover around 35 sec
upvoted 3 times
Cipi 1 month, 1 week ago
In both options A and D we have 3 database instances:
Option A: 1 instance for read and write, 1 standby instance, and 1 additional instance for reads
Option D: 1 instance for read and write and 2 instances for both reads and standby
Thus, option D gives 2 DB instances for reads compared to only 1 given by option A, and cost seems to be in favor of option D if we consider On-Demand instances (https://aws.amazon.com/rds/postgresql/pricing/?pg=pr&loc=3). So I consider option D better.
upvoted 1 times
Axaus 1 month, 1 week ago
A.
It has to be cost effective. Multi-AZ for availability and 1 read replica.
upvoted 1 times
greyrose 1 month, 2 weeks ago
A
upvoted 1 times
nosense 1 month, 2 weeks ago
RDS Multi-AZ DB instance deployment is a highly available and scalable database deployment option that provides automatic failover support in most scenarios in less than 40 seconds.
upvoted 2 times
Question #421 Topic 1
A company runs a highly available SFTP service. The SFTP service uses two Amazon EC2 Linux instances that run with elastic IP addresses to accept traffic from trusted IP sources on the internet. The SFTP service is backed by shared storage that is attached to the instances. User
accounts are created and managed as Linux users in the SFTP servers.
The company wants a serverless option that provides high IOPS performance and highly configurable security. The company also wants to maintain control over user permissions.
Which solution will meet these requirements?
A. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Create an AWS Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach the EBS volume to the SFTP service endpoint. Grant users access to the SFTP service.
B. Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS Transfer Family SFTP service with elastic IP
addresses and a VPC endpoint that has internet-facing access. Attach a security group to the endpoint that allows only trusted IP addresses. Attach the EFS volume to the SFTP service endpoint. Grant users access to the SFTP service.
C. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.
D. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a VPC endpoint that has internal access in a private subnet. Attach a security group that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.
Community vote distribution
B (65%) D (24%) 12%
Axeashes 2 weeks, 1 day ago
https://aws.amazon.com/blogs/storage/use-ip-whitelisting-to-secure-your-aws-transfer-for-sftp-servers/
upvoted 1 times
TariqKipkemei 3 weeks ago
EFS is best to serve this purpose.
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
First, serverless = EFS.
Second, the storage is attached to both Linux instances at the same time; only EFS can do that.
upvoted 1 times
envest 4 weeks, 1 day ago
Answer C (from abylead.com)
Transfer Family offers fully managed serverless support for B2B file transfers via SFTP, AS2, FTPS, and FTP, directly in and out of S3 or EFS. For controlled internet access you can use internet-facing endpoints with Transfer SFTP servers and restrict trusted internet sources with the VPC's default security group. In addition, S3 Access Point aliases allow you to use S3 bucket names with a unique access control policy on shared S3 datasets.
Transfer SFTP and S3: https://aws.amazon.com/blogs/apn/how-to-use-aws-transfer-family-to-replace-and-scale-sftp-servers/
A) Transfer SFTP doesn't support EBS, which is not shared storage and not serverless: infeasible.
B) EFS mounts via ENIs, not endpoints: infeasible.
D) A public endpoint for internet access is missing: infeasible.
upvoted 1 times
omoakin 1 month ago
B
upvoted 1 times
vesen22 1 month ago
EFS is serverless. There is no reference in S3 about IOPS
upvoted 2 times
norris81 1 month ago
https://aws.amazon.com/blogs/storage/use-ip-whitelisting-to-secure-your-aws-transfer-for-sftp-servers/ is worth a read
upvoted 2 times
Option D is incorrect because it suggests using an S3 bucket in a private subnet with a VPC endpoint, which may not meet the requirement of maintaining control over user permissions as effectively as the EFS-based solution.
upvoted 2 times
It is D
Refer https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html for further details.
upvoted 1 times
EFS is serverless and has high IOPS.
Regardless of the IOPS, I believe option D is incorrect because it is internal, and the requirement needs internet access.
upvoted 2 times
The reason is that AWS Transfer Family is a serverless option that provides a fully managed service for transferring files over Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP). It allows you to use your existing authentication systems and store your data in Amazon S3 or Amazon EFS. It also provides high IOPS performance and highly configurable security option
upvoted 1 times
The question requires highly configurable security --> that excludes default S3 encryption, which is SSE-S3 (not configurable).
upvoted 1 times
Option D is not the best choice for this scenario because the AWS Transfer Family SFTP service, when configured with a VPC endpoint that has internal access in a private subnet, will not be accessible from the internet.
upvoted 1 times
hiroohiroo 1 month, 1 week ago
S3 + VPC endpoint
upvoted 1 times
EFS is a serverless, fully elastic storage as mentioned below
https://aws.amazon.com/efs/
upvoted 1 times
Also, S3 is a blob storage service and there aren't any IOPS metric for S3 which inclines more towards EFS
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Shouldn't it be B, according to ChatGPT?
Amazon EFS provides a serverless file storage option with high IOPS performance, which is suitable for the shared storage requirement of the SFTP service.
The AWS Transfer Family allows you to create an SFTP service with highly configurable security. By configuring a VPC endpoint with internet-facing access and attaching a security group that allows only trusted IP addresses, you can control access to the SFTP service.
By attaching an encrypted Amazon EFS volume to the SFTP service endpoint, you can ensure data at rest is encrypted, meeting the security requirements.
Granting users access to the SFTP service allows you to maintain control over user permissions, as user accounts are managed as Linux users within the SFTP servers.
upvoted 2 times
Option B is not the correct answer because it does not meet a serverless option
upvoted 1 times
Question #422 Topic 1
A company is developing a new machine learning (ML) model solution on AWS. The models are developed as independent microservices that fetch approximately 1 GB of model data from Amazon S3 at startup and load the data into memory. Users access the models through an
asynchronous API. Users can send a request or a batch of requests and specify where the results should be sent.
The company provides models to hundreds of users. The usage patterns for the models are irregular. Some models could be unused for days or weeks. Other models could receive batches of thousands of requests at a time.
Which design should a solutions architect recommend to meet these requirements?
A. Direct the requests from the API to a Network Load Balancer (NLB). Deploy the models as AWS Lambda functions that are invoked by the NLB.
B. Direct the requests from the API to an Application Load Balancer (ALB). Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from an Amazon Simple Queue Service (Amazon SQS) queue. Use AWS App Mesh to scale the instances of the ECS cluster based on the SQS queue size.
C. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as AWS Lambda functions that are invoked by SQS events. Use AWS Auto Scaling to increase the number of vCPUs for the Lambda functions based on the SQS queue size.
D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as Amazon Elastic Container Service (Amazon ECS) services that read from the queue. Enable AWS Auto Scaling on Amazon ECS for both the cluster and copies of the service based on the queue size.
Community vote distribution
D (100%)
TariqKipkemei 3 weeks ago
For once examtopic answer is correct :) haha...
Batch requests/async = Amazon SQS
Microservices = Amazon ECS
Workload variations = AWS Auto Scaling on Amazon ECS
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
D. No need for an Application Load Balancer like C says; it's nowhere in the text.
SQS is needed to ensure every request gets routed properly in a microservices architecture and waits until it is picked up. ECS with Auto Scaling will scale based on the irregular usage pattern mentioned.
upvoted 1 times
anibinaadi 1 month ago
It is D
Refer https://aws.amazon.com/blogs/containers/amazon-elastic-container-service-ecs-auto-scaling-using-custom-metrics/ for additional information/knowledge.
upvoted 1 times
examtopictempacc 1 month, 1 week ago
asynchronous=SQS, microservices=ECS.
Use AWS Auto Scaling to adjust the number of ECS services.
upvoted 3 times
TariqKipkemei 3 weeks ago
good breakdown :)
upvoted 1 times
nosense 1 month, 2 weeks ago
because it is scalable, reliable, and efficient. C does not scale the models automatically
upvoted 3 times
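The queue-size scaling from answer D can be sketched as an Application Auto Scaling target-tracking policy on a customized SQS metric. The cluster, service, and queue names below are hypothetical, and the target value is an illustrative assumption.

```python
# Sketch: parameters for application-autoscaling put_scaling_policy(**policy),
# scaling an ECS service's DesiredCount on SQS queue depth.
# Resource and queue names are hypothetical placeholders.
scaling_policy = {
    "PolicyName": "sqs-backlog-target-tracking",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/ml-cluster/model-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 100.0,  # e.g. aim for ~100 queued requests per task
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "model-requests"}],
            "Statistic": "Average",
        },
    },
}
```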
Question #423 Topic 1
A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:
Which IAM principals can the solutions architect attach this policy to? (Choose two.)
A. Role
B. Group
C. Organization
D. Amazon Elastic Container Service (Amazon ECS) resource
E. Amazon EC2 resource
Community vote distribution
AB (100%)
nosense Highly Voted 1 month, 2 weeks ago
identity-based policy used for role and group
upvoted 5 times
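The distinguishing feature behind answer AB: an identity-based policy has no "Principal" element, because the principal is the IAM user, group, or role the policy is attached to. The policy below is purely illustrative, not the elided policy from the question.

```python
# Illustrative identity-based policy (NOT the policy image from the question).
# Note the absence of a "Principal" element: that is what makes it attachable
# to users, groups, and roles, but not to resources or organizations.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"}
    ],
}

assert "Principal" not in identity_policy["Statement"][0]
```

Resource-based policies (e.g. S3 bucket policies) are the ones that carry a Principal element.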
Question #424 Topic 1
A company is running a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that need to run 24 hours a day, 7 days a week and backend nodes that need to run only for a short time based on workload. The number of backend nodes varies during the day.
The company needs to scale out and scale in more instances based on workload. Which solution will meet these requirements MOST cost-effectively?
A. Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
B. Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.
C. Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.
D. Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
Community vote distribution
B (100%)
TariqKipkemei 3 weeks ago
Option B will meet this requirement:
Frontend nodes that need to run 24 hours a day, 7 days a week = Reserved Instances
Backend nodes that run only for a short time = Spot Instances
upvoted 1 times
udo2020 3 weeks, 6 days ago
But Spot Instances are not based on workloads! Maybe it should be A...!?
upvoted 1 times
Efren 1 month, 2 weeks ago
Agreed
upvoted 1 times
nosense 1 month, 2 weeks ago
Reserved + Spot. Fargate is for serverless.
upvoted 3 times
Question #425 Topic 1
A company uses high block storage capacity to run its workloads on premises. The company's daily peak input and output transactions per second are not more than 15,000 IOPS. The company wants to migrate the workloads to Amazon EC2 and to provision disk performance
independent of storage capacity.
Which Amazon Elastic Block Store (Amazon EBS) volume type will meet these requirements MOST cost-effectively?
A. GP2 volume type
B. io2 volume type
C. GP3 volume type
D. io1 volume type
Community vote distribution
C (92%) 8%
alexandercamachop 3 weeks, 3 days ago
The GP3 (General Purpose SSD) volume type in Amazon Elastic Block Store (EBS) is the most cost-effective option for the given requirements. GP3 volumes offer a balance of price and performance and are suitable for a wide range of workloads, including those with moderate I/O needs.
GP3 volumes allow you to provision performance independently from storage capacity, which means you can adjust the baseline performance (measured in IOPS) and throughput (measured in MiB/s) separately from the volume size. This flexibility allows you to optimize your costs while meeting the workload requirements.
In this case, since the company's daily peak input and output transactions per second are not more than 15,000 IOPS, GP3 volumes provide a suitable and cost-effective option for their workloads.
upvoted 1 times
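The "performance independent of capacity" point can be sketched with `create_volume` parameters: on gp3, `Iops` and `Throughput` are provisioned separately from `Size`. The values and Availability Zone below are illustrative assumptions, and the API is not actually called.

```python
# Sketch of ec2.create_volume(**params): with gp3, Iops and Throughput are
# set independently of Size. Values here are illustrative only.
params = {
    "VolumeType": "gp3",
    "Size": 200,          # GiB -- capacity
    "Iops": 15000,        # up to 16,000 for gp3, independent of Size
    "Throughput": 500,    # MiB/s, also independent of Size
    "AvailabilityZone": "us-east-1a",
}
```

With gp2, by contrast, IOPS are tied to volume size (3 IOPS per GiB baseline), which is why gp3 fits this question.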
maver144 1 month ago
It is not C, pals. The company wants to migrate the workloads to Amazon EC2 and to provision disk performance independent of storage capacity. With GP3 we have to increase storage capacity to increase IOPS over the baseline.
You can only choose IOPS independently with the io family, and io2 is in general better than io1.
upvoted 1 times
Joselucho38 1 month ago
Therefore, the most suitable and cost-effective option in this scenario is the GP3 volume type (option C).
upvoted 1 times
Yadav_Sanjay 1 month, 1 week ago
Both GP2 and GP3 have a max of 16,000 IOPS, but GP3 is cost effective.
https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/
upvoted 3 times
Efren 1 month, 2 weeks ago
GP3 allows 16,000 IOPS
upvoted 3 times
nosense 1 month, 2 weeks ago
GP3: $0.08 per GB; GP2: $0.10 per GB
upvoted 3 times
Question #426 Topic 1
A company needs to store data from its healthcare application. The application’s data frequently changes. A new regulation requires audit access at all levels of the stored data.
The company hosts the application on an on-premises infrastructure that is running out of storage capacity. A solutions architect must securely migrate the existing data to AWS while satisfying the new regulation.
Which solution will meet these requirements?
A. Use AWS DataSync to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
B. Use AWS Snowcone to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
C. Use Amazon S3 Transfer Acceleration to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
D. Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
Community vote distribution
A (73%) D (27%)
TariqKipkemei 2 weeks, 6 days ago
For a scenario where they want to maintain some/all of the data on prem then AWS Storage Gateway would be the option to offer hybrid cloud storage.
In this case they want to migrate all the data to the cloud so AWS Datasync is the best option.
upvoted 2 times
alexandercamachop 3 weeks, 3 days ago
Datasync, this way we can monitor and audit all of the data at all times.
With Snowcone / Snowball we lose access to audit the data while it arrives into AWS Data centers / Region / Availability Zone.
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
AWS DataSync is a data transfer service that simplifies and accelerates moving large amounts of data to and from AWS. It is designed to securely and efficiently migrate data from on-premises storage systems to AWS services like Amazon S3.
In this scenario, the company needs to securely migrate its healthcare application data to AWS while satisfying the new regulation for audit access. By using AWS DataSync, the existing data can be securely transferred to Amazon S3, ensuring the data is stored in a scalable and durable storage service.
Additionally, using AWS CloudTrail to log data events ensures that all access and activity related to the data stored in Amazon S3 is audited. This helps meet the regulatory requirement for audit access at all levels of the stored data.
upvoted 1 times
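The "log data events" half of answer A corresponds to a CloudTrail event selector that enables object-level S3 events, as passed to `put_event_selectors`. The bucket name below is a hypothetical placeholder and the call itself is not made.

```python
# Sketch: CloudTrail event selectors enabling S3 data events (object-level
# audit) for a hypothetical bucket, as used with put_event_selectors.
event_selectors = [
    {
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [
            {
                "Type": "AWS::S3::Object",
                # trailing slash scopes the selector to all objects in the bucket
                "Values": ["arn:aws:s3:::healthcare-data-bucket/"],
            }
        ],
    }
]
```

Management events alone (answers B and D) would not capture object-level access, which is why data events are needed for "audit access at all levels".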
Felix_br 3 weeks, 4 days ago
DataSync can be used to back up data from one AWS storage service to another. Services such as Amazon S3 already have built-in tools for automatic data replication from one bucket to another. However, the replication only occurs for new data added to the bucket after the replication setting was turned on. So, is it possible to use DataSync from on-premises to AWS?
upvoted 2 times
omoakin 1 month ago
Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
upvoted 1 times
omoakin 1 month ago
B
upvoted 1 times
omoakin 1 month ago
Sorry, I meant D
upvoted 1 times
kanekichan 1 month, 1 week ago
A. Datasync = keyword = migrate/move
upvoted 1 times
EA100 1 month, 1 week ago
A. Use AWS DataSync to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
AWS DataSync is a service designed specifically for securely and efficiently transferring large amounts of data between on-premises storage systems and AWS services like Amazon S3. It provides a reliable and optimized way to migrate data while maintaining data integrity.
AWS CloudTrail, on the other hand, is a service that logs and monitors management events in your AWS account. While it can capture data events for certain services, its primary focus is on tracking management actions like API calls and configuration changes.
Therefore, using AWS DataSync to transfer the existing data to Amazon S3 and leveraging AWS CloudTrail to log data events aligns with the requirement of securely migrating the data and ensuring audit access at all levels, as specified by the new regulation.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
A
AWS DataSync is a data transfer service that simplifies and accelerates moving large amounts of data between on-premises storage systems and Amazon S3. It provides secure and efficient data transfer while ensuring data integrity during the migration process.
By using AWS DataSync, you can securely transfer the data from the on-premises infrastructure to Amazon S3, meeting the requirement for securely migrating the data. Additionally, AWS CloudTrail can be used to log data events, allowing audit access at all levels of the stored data.
upvoted 1 times
Efren 1 month, 2 weeks ago
One-time sync: it's DataSync. Don't bother with greyrose's answers; they are usually wrong.
upvoted 2 times
nosense 1 month, 2 weeks ago
Easy data transfer to S3, plus encryption
upvoted 2 times
greyrose 1 month, 2 weeks ago
D
upvoted 3 times
Question #427 Topic 1
A solutions architect is implementing a complex Java application with a MySQL database. The Java application must be deployed on Apache Tomcat and must be highly available.
What should the solutions architect do to meet these requirements?
A. Deploy the application in AWS Lambda. Configure an Amazon API Gateway API to connect with the Lambda functions.
B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling deployment policy.
C. Migrate the database to Amazon ElastiCache. Configure the ElastiCache security group to allow access from the application.
D. Launch an Amazon EC2 instance. Install a MySQL server on the EC2 instance. Configure the application on the server. Create an AMI. Use the AMI to create a launch template with an Auto Scaling group.
Community vote distribution
B (100%)
cloudenthusiast 1 month, 1 week ago
B
AWS Elastic Beanstalk provides an easy and quick way to deploy, manage, and scale applications. It supports a variety of platforms, including Java and Apache Tomcat. By using Elastic Beanstalk, the solutions architect can upload the Java application and configure the environment to run Apache Tomcat.
upvoted 4 times
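Answer B's "load-balanced environment with a rolling deployment policy" translates into Elastic Beanstalk option settings roughly as below. This is a parameter sketch only; the application name, environment name, and solution-stack string are illustrative assumptions.

```python
# Sketch of elasticbeanstalk.create_environment(**params) for a
# load-balanced Tomcat environment with rolling deployments.
# Names and the solution-stack string are illustrative placeholders.
params = {
    "ApplicationName": "java-app",
    "EnvironmentName": "java-app-prod",
    "SolutionStackName": "64bit Amazon Linux 2 v4.4.4 running Tomcat 9 Corretto 11",
    "OptionSettings": [
        # highly available: load-balanced, multi-instance environment
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        # rolling deployment policy, as answer B specifies
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy", "Value": "Rolling"},
    ],
}
```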
nosense 1 month, 2 weeks ago
Easy deployment, management, and scaling
upvoted 2 times
greyrose 1 month, 2 weeks ago
B
upvoted 1 times
Question #428 Topic 1
A serverless application uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The Lambda function needs permissions to read and write to the DynamoDB table.
Which solution will give the Lambda function access to the DynamoDB table MOST securely?
A. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters as part of the Lambda environment variables. Ensure that other AWS users do not have read and write access to the Lambda function configuration.
B. Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that allows read and write access to the DynamoDB table. Update the configuration of the Lambda function to use the new role as the execution role.
C. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters in AWS Systems Manager Parameter Store as secure string parameters. Update the Lambda function code to retrieve the secure string parameters before connecting to the DynamoDB table.
D. Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that allows read and write access from the Lambda function. Update the code of the Lambda function to attach to the new role as an execution role.
Community vote distribution
B (100%)
omoakin 1 month ago
B
upvoted 1 times
alvinnguyennexcel 1 month ago
vote B
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
B
Option B suggests creating an IAM role that includes Lambda as a trusted service, meaning the role is specifically designed for Lambda functions. The role should have a policy attached to it that grants the required read and write access to the DynamoDB table.
upvoted 2 times
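Answer B amounts to two documents: a trust policy that names Lambda as the trusted service, and a permissions policy scoped to the table. The sketch below shows both; the table ARN, account ID, and action list are illustrative assumptions.

```python
# Sketch of answer B: an execution role that trusts the Lambda service,
# plus a permissions policy scoped to one DynamoDB table.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},  # Lambda as trusted service
            "Action": "sts:AssumeRole",
        }
    ],
}

permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                       "dynamodb:UpdateItem", "dynamodb:Query"],
            # hypothetical table ARN
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/app-table",
        }
    ],
}
```

Because the role is assumed by the function at runtime, no long-lived access keys (answers A and C) ever exist.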
nosense 1 month, 2 weeks ago
B is right
"Role" is the keyword, and Lambda is the trusted service
upvoted 3 times
Question #429 Topic 1
The following IAM policy is attached to an IAM group. This is the only policy applied to the group.
What are the effective IAM permissions of this policy for group members?
A. Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied.
B. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA).
C. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action.
D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
Community vote distribution
D (100%)
jack79 1 week, 6 days ago
Came in exam today
upvoted 1 times
antropaws 4 weeks, 1 day ago
D sounds about right.
upvoted 1 times
omoakin 1 month, 1 week ago
D is correct
upvoted 1 times
Question #430 Topic 1
A manufacturing company has machine sensors that upload .csv files to an Amazon S3 bucket. These .csv files must be converted into images and must be made available as soon as possible for the automatic generation of graphical reports.
The images become irrelevant after 1 month, but the .csv files must be kept to train machine learning (ML) models twice a year. The ML trainings and audits are planned weeks in advance.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A. Launch an Amazon EC2 Spot Instance that downloads the .csv files every hour, generates the image files, and uploads the images to the S3 bucket.
B. Design an AWS Lambda function that converts the .csv files into images and stores the images in the S3 bucket. Invoke the Lambda function when a .csv file is uploaded.
C. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Glacier 1 day after they are uploaded. Expire the image files after 30 days.
D. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 1 day after they are uploaded. Expire the image files after 30 days.
E. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 1 day after they are uploaded. Keep the image files in Reduced Redundancy Storage (RRS).
Community vote distribution
BC (83%) BE (17%)
smartegnine 3 days, 18 hours ago
The key phrase is "weeks in advance": even if you save the data in S3 Glacier, it is acceptable for retrieval to take a couple of days.
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
A. Wrong, because no lifecycle rule is mentioned.
B. CORRECT
C. CORRECT
D. Why store in S3 One Zone-Infrequent Access (S3 One Zone-IA) when the files will be irrelevant after 1 month? (Availability 99.99%; consider cost.)
E. Again, why use Reduced Redundancy Storage (RRS) when the files are irrelevant after 1 month? (Availability 99.99%; consider cost.)
upvoted 1 times
RoroJ 1 month ago
B: serverless and fast responding
E: will keep the .csv files for a year; C and D expire the files after 30 days.
upvoted 2 times
RoroJ 1 month ago
B&C, misread the question, expires the image files after 30 days.
upvoted 1 times
hiroohiroo 1 month, 1 week ago
https://aws.amazon.com/jp/about-aws/whats-new/2021/11/amazon-s3-glacier-storage-class-amazon-s3-glacier-flexible-retrieval/
upvoted 2 times
nosense 1 month, 2 weeks ago
B: serverless and cost effective
C: correct rule to store
upvoted 2 times
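The lifecycle half of answer BC can be sketched as the configuration passed to `put_bucket_lifecycle_configuration`. The prefixes are hypothetical assumptions about how the .csv and image objects are keyed.

```python
# Sketch: S3 lifecycle rules matching answers B + C, as passed to
# put_bucket_lifecycle_configuration. Prefixes are hypothetical placeholders.
lifecycle = {
    "Rules": [
        {
            "ID": "csv-to-glacier",
            "Filter": {"Prefix": "csv/"},
            "Status": "Enabled",
            # .csv files move to Glacier 1 day after upload (trainings are
            # planned weeks in advance, so slow retrieval is acceptable)
            "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
        },
        {
            "ID": "expire-images",
            "Filter": {"Prefix": "images/"},
            "Status": "Enabled",
            # images become irrelevant after 1 month
            "Expiration": {"Days": 30},
        },
    ]
}
```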
Question #431 Topic 1
A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game’s developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores.
What should a solutions architect do to meet these requirements?
A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
Community vote distribution
B (100%)
haoAWS 4 days, 4 hours ago
B is correct
upvoted 1 times
jf_topics 2 weeks, 3 days ago
B correct.
upvoted 1 times
hiroohiroo 1 month, 1 week ago
https://aws.amazon.com/jp/blogs/news/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
Amazon ElastiCache for Redis is a highly scalable and fully managed in-memory data store. It can be used to store and compute the scores in real time for the top-10 scoreboard. Redis supports sorted sets, which can be used to store the scores as well as perform efficient queries to retrieve the top scores. By utilizing ElastiCache for Redis, the web application can quickly retrieve the current scores without the need to perform complex and potentially resource-intensive database queries.
upvoted 1 times
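To make the sorted-set point concrete, here is a pure-Python sketch mirroring the Redis operations a leaderboard would use (ZADD / ZREVRANGE); with ElastiCache for Redis you would issue the same operations through a Redis client rather than this in-memory stand-in.

```python
# In-memory stand-in for a Redis sorted-set leaderboard (ZADD / ZREVRANGE).
class Leaderboard:
    def __init__(self):
        self.scores = {}

    def zadd(self, player: str, score: float) -> None:
        """Add or update a player's score (Redis ZADD semantics)."""
        self.scores[player] = score

    def zrevrange(self, n: int):
        """Return the top-n players by score, highest first."""
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

board = Leaderboard()
board.zadd("alice", 120)
board.zadd("bob", 300)
board.zadd("carol", 220)
print(board.zrevrange(2))  # → [('bob', 300), ('carol', 220)]
```

Redis keeps the set ordered on insert, so the top-10 query is cheap and near-real-time, and persistence/snapshotting supports the stop-and-restore requirement.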
Efren 1 month, 2 weeks ago
More questions!!!
upvoted 3 times
Question #432 Topic 1
An ecommerce company wants to use machine learning (ML) algorithms to build and train models. The company will use the models to visualize complex scenarios and to detect trends in customer data. The architecture team wants to integrate its ML models with a reporting platform to
analyze the augmented data and use the data directly in its business intelligence dashboards. Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Glue to create an ML transform to build and train models. Use Amazon OpenSearch Service to visualize the data.
B. Use Amazon SageMaker to build and train models. Use Amazon QuickSight to visualize the data.
C. Use a pre-built ML Amazon Machine Image (AMI) from the AWS Marketplace to build and train models. Use Amazon OpenSearch Service to visualize the data.
D. Use Amazon QuickSight to build and train models by using calculated fields. Use Amazon QuickSight to visualize the data.
Community vote distribution
B (100%)
TariqKipkemei 2 weeks, 6 days ago
Business intelligence, visualizations = Amazon QuickSight
ML = Amazon SageMaker
upvoted 1 times
omoakin 1 month, 1 week ago
Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy ML models quickly.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Amazon SageMaker is a fully managed service that provides a complete set of tools and capabilities for building, training, and deploying ML models. It simplifies the end-to-end ML workflow and reduces operational overhead by handling infrastructure provisioning, model training, and deployment.
To visualize the data and integrate it into business intelligence dashboards, Amazon QuickSight can be used. QuickSight is a cloud-native business intelligence service that allows users to easily create interactive visualizations, reports, and dashboards from various data sources, including the augmented data generated by the ML models.
upvoted 2 times
nosense 1 month, 2 weeks ago
B: SageMaker provides ML model building, training, and deployment
upvoted 1 times
Question #433 Topic 1
A company is running its production and nonproduction environment workloads in multiple AWS accounts. The accounts are in an organization in AWS Organizations. The company needs to design a solution that will prevent the modification of cost usage tags.
Which solution will meet these requirements?
A. Create a custom AWS Config rule to prevent tag modification except by authorized principals.
B. Create a custom trail in AWS CloudTrail to prevent tag modification.
C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.
D. Create custom Amazon CloudWatch logs to prevent tag modification.
Community vote distribution
C (100%)
TariqKipkemei 2 weeks, 1 day ago
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization.
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
Anytime we need to restrict anything in an AWS Organization, it is SCP Policies.
upvoted 1 times
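As a sketch, an SCP along these lines enforces the deny (the CostCenter tag key and TagAdmin role name are assumptions for illustration; a real policy would usually cover more tagging actions than the two EC2 ones shown):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCostTagModification",
      "Effect": "Deny",
      "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringEquals": {"aws:TagKeys": ["CostCenter"]},
        "StringNotLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/TagAdmin"}
      }
    }
  ]
}
```

Because SCPs apply organization-wide guardrails, no principal in the member accounts can bypass this, unlike account-local IAM policies.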
Abrar2022 3 weeks, 3 days ago
AWS Config is for tracking configuration changes
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
so it's wrong. Right answer is C
upvoted 2 times
Question #434 Topic 1
A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto
Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region with minimal downtime.
What should a solutions architect do to meet these requirements with the LEAST amount of downtime?
A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer.
B. Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to be launched when needed. Configure DNS failover to point to the new disaster recovery Region's load balancer.
C. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be launched when needed. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer.
D. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Create an Amazon CloudWatch alarm to trigger an AWS Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer.
Community vote distribution
A (58%) C (25%) D (17%)
Wablo 1 week, 3 days ago
Both Option A and Option D include the necessary steps of setting up an Auto Scaling group and load balancer in the disaster recovery Region, configuring the DynamoDB table as a global table, and updating DNS records. However, Option D provides a more detailed approach by explicitly mentioning the use of an Amazon CloudWatch alarm and AWS Lambda function to automate the DNS update process.
By leveraging an Amazon CloudWatch alarm, Option D allows for an automated failover mechanism. When triggered, the CloudWatch alarm can execute an AWS Lambda function, which in turn can update the DNS records in Amazon Route 53 to redirect traffic to the disaster recovery load balancer in the new Region. This automation helps reduce the potential for human error and further minimizes downtime.
Answer is D
upvoted 1 times
TariqKipkemei 2 weeks, 1 day ago
The company wants to ensure the application 'CAN' be made available in another AWS Region with minimal downtime. Meaning they want to be able to launch infra on need basis.
Best answer is C.
upvoted 1 times
dajform 5 days, 15 hours ago
B, C are not OK because "launching resources when needed", which will increase the time to recover "DR"
upvoted 1 times
Wablo 1 week, 3 days ago
minimal downtime, not minimal effort!
D
upvoted 1 times
AshishRocks 3 weeks, 4 days ago
I feel it is A
Configure DNS failover: Use DNS failover to point the application's DNS record to the load balancer in the disaster recovery Region. DNS failover allows you to route traffic to the disaster recovery Region in case of a failure in the primary Region.
upvoted 2 times
Wablo 1 week, 3 days ago
Once you configure the DNS manually, it's no longer automated the way Lambda makes it.
upvoted 1 times
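For reference, the DNS failover in option A comes down to a pair of Route 53 failover alias records; a hedged sketch of the change batch (the domain, load balancer DNS names, and hosted zone IDs are placeholders):

```json
{
  "Comment": "Failover alias records (all names and IDs are illustrative)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "game.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Failover": "PRIMARY",
        "AliasTarget": {
          "HostedZoneId": "Z00000000000EXAMPLE",
          "DNSName": "primary-alb.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "game.example.com",
        "Type": "A",
        "SetIdentifier": "secondary",
        "Failover": "SECONDARY",
        "AliasTarget": {
          "HostedZoneId": "Z11111111111EXAMPLE",
          "DNSName": "dr-alb.eu-west-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```

With `EvaluateTargetHealth` enabled, Route 53 itself shifts traffic to the SECONDARY record when the primary load balancer is unhealthy, which is why no CloudWatch alarm or Lambda function is needed.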
lucdt4 1 month ago
A and D are correct candidates.
But Route 53 has a built-in DNS failover feature for when instances go down, so we don't need CloudWatch and Lambda as a trigger
-> A correct
upvoted 4 times
smartegnine 3 days, 17 hours ago
Did not see Route 53 in this question right? So my opinion is D
upvoted 1 times
Wablo 1 week, 3 days ago
Yes it does, but you have to configure it; it's not automated anymore. D is the best answer!
upvoted 1 times
hiroohiroo 1 month, 1 week ago
A is the one with DNS failover.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
A
By configuring the DynamoDB table as a global table, you can replicate the table data across multiple AWS Regions, including the primary Region and the disaster recovery Region. This ensures that data is available in both Regions and can be seamlessly accessed during a failover event.
upvoted 1 times
Efren 1 month, 1 week ago
A for me, DNS should failover
upvoted 2 times
Macosxfan 1 month, 1 week ago
I would pick A
upvoted 1 times
nosense 1 month, 1 week ago
Misunderstanding. Only A valid
upvoted 2 times
Efren 1 month, 1 week ago
I would go for A. If we have DNS failover, why to burden with lambda updating the DNS records?
upvoted 1 times
Question #435 Topic 1
A company needs to migrate a MySQL database from its on-premises data center to AWS within 2 weeks. The database is 20 TB in size. The company wants to complete the migration with minimal downtime.
Which solution will migrate the database MOST cost-effectively?
A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes. Send the Snowball Edge device to AWS to finish the migration and continue the ongoing replication.
B. Order an AWS Snowmobile vehicle. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowmobile vehicle back to AWS to finish the migration and continue the ongoing replication.
C. Order an AWS Snowball Edge Compute Optimized with GPU device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowball device to AWS to finish the migration and continue the ongoing replication.
D. Order a 1 GB dedicated AWS Direct Connect connection to establish a connection with the data center. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes.
Community vote distribution
A (74%) D (26%)
RoroJ 1 month ago
D: Direct Connect needs a long time to set up, plus you have to deal with network and security changes to the existing environment. And then add the data transfer time... No way it can be done in 2 weeks.
upvoted 3 times
Joselucho38 1 month ago
Overall, option D combines the reliability and cost-effectiveness of AWS Direct Connect, AWS DMS, and AWS SCT to migrate the database efficiently and minimize downtime.
upvoted 2 times
Abhineet9148232 1 month ago
D - Direct Connect takes at least a month to set up! The requirement is within 2 weeks.
upvoted 3 times
Rob1L 1 month, 1 week ago
AWS Snowball Edge Storage Optimized device is used for large-scale data transfers, but the lead time for delivery, data transfer, and return shipping would likely exceed the 2-week time frame. Also, ongoing database changes wouldn't be replicated while the device is in transit.
upvoted 1 times
Rob1L 1 month, 1 week ago
Change to A because "Most cost effective"
upvoted 2 times
hiroohiroo 1 month, 1 week ago
https://docs.aws.amazon.com/ja_jp/snowball/latest/developer-guide/device-differences.html#device-options
It's A.
upvoted 2 times
norris81 1 month, 1 week ago
How long does direct connect take to provision ?
upvoted 2 times
examtopictempacc 1 month, 1 week ago
At least one month, and expensive.
upvoted 1 times
nosense 1 month, 1 week ago
A) ~$300 for the first 10 days, plus ~$150 shipping
D) ~$750 for 2 weeks
upvoted 3 times
Efren 1 month, 1 week ago
Thanks, i was checking the speed more than price. Thanks for the clarification
upvoted 1 times
Efren 1 month, 1 week ago
20 TB at 1 Gbps would take around 44 hours. I guess that's less time than receiving snow devices and sending them back.
upvoted 1 times
Efren 1 month, 1 week ago
Wrong myself, i was checking time, but not price
upvoted 1 times
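Efren's back-of-the-envelope number checks out; a quick sanity check (assumes decimal terabytes and a fully saturated 1 Gbps link, ignoring protocol overhead):

```python
# Rough transfer-time estimate: 20 TB over a dedicated 1 Gbps link.
data_bits = 20e12 * 8        # 20 TB (decimal) expressed in bits
link_bps = 1e9               # 1 Gbps sustained throughput
hours = data_bits / link_bps / 3600
print(f"{hours:.1f} hours")  # 44.4 hours
```

Real-world throughput over DMS would be lower, but even at half the rate the transfer itself fits in the 2-week window; the bottleneck in option D is the Direct Connect provisioning lead time, not the copy.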
Question #436 Topic 1
A company moved its on-premises PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. The company successfully launched a new product. The workload on the database has increased. The company wants to accommodate the larger workload without adding
infrastructure.
Which solution will meet these requirements MOST cost-effectively?
A. Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB instance larger.
B. Make the Amazon RDS for PostgreSQL DB instance a Multi-AZ DB instance.
C. Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance.
D. Make the Amazon RDS for PostgreSQL DB instance an on-demand DB instance.
Community vote distribution
A (100%)
elmogy 1 month ago
A.
"without adding infrastructure" means scaling vertically and choosing larger instance. "MOST cost-effectively" reserved instances
upvoted 4 times
examtopictempacc 1 month, 1 week ago
A.
Not C: without adding infrastructure
upvoted 2 times
EA100 1 month, 1 week ago
Answer - C
Option B, making the Amazon RDS for PostgreSQL DB instance a Multi-AZ DB instance, would provide high availability and fault tolerance but may not directly address the need for increased capacity to handle the larger workload.
Therefore, the recommended solution is Option C: Buy reserved DB instances for the workload and add another Amazon RDS for PostgreSQL DB instance to accommodate the increased workload in a cost-effective manner.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
C
Option C: buying reserved DB instances for the total workload and adding another Amazon RDS for PostgreSQL DB instance seems to be the most appropriate choice. It allows for workload distribution across multiple instances, providing scalability and potential performance improvements.
Additionally, reserved instances can provide cost savings in the long term.
upvoted 1 times
nosense 1 month, 1 week ago
A for me, because without adding additional infrastructure
upvoted 3 times
th3k33n 1 month, 2 weeks ago
Should be C
upvoted 1 times
Efren 1 month, 1 week ago
That would add more infraestructure. A would increase the size, keeping the number of instances, i think
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Option A involves making the existing Amazon RDS for PostgreSQL DB instance larger. While this can improve performance, it may not be sufficient to handle a significantly increased workload. It also doesn't distribute the workload or provide scalability.
upvoted 1 times
nosense 1 month, 1 week ago
The main requirements are not HA; they are cost-effectiveness and no additional infrastructure
upvoted 1 times
omoakin 1 month ago
A is the best
upvoted 1 times
Question #437 Topic 1
A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.
What should a solutions architect recommend?
A. Deploy Amazon Inspector and associate it with the ALB.
B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
Community vote distribution
B (93%) 7%
samehpalass 1 week, 1 day ago
Since there is no Shield protection option here, it's the WAF rate limit
upvoted 1 times
TariqKipkemei 2 weeks ago
B in swahili 'ba' :)
external systems, incoming requests = AWS WAF
upvoted 1 times
Axeashes 2 weeks ago
layer 7 DDoS protection with WAF https://docs.aws.amazon.com/waf/latest/developerguide/ddos-get-started-web-acl-rbr.html
upvoted 1 times
antropaws 3 weeks, 2 days ago
Joselucho38 1 month ago
AWS WAF (Web Application Firewall) is a service that provides protection for web applications against common web exploits. By associating AWS WAF with the Application Load Balancer (ALB), you can inspect incoming traffic and define rules to allow or block requests based on various criteria.
upvoted 4 times
cloudenthusiast 1 month, 1 week ago
B
AWS Web Application Firewall (WAF) is a service that helps protect web applications from common web exploits and provides advanced security features. By deploying AWS WAF and associating it with the ALB, the company can set up rules to filter and block incoming requests based on specific criteria, such as IP addresses.
In this scenario, the company is facing performance issues due to a high request rate from illegitimate external systems with changing IP addresses. By configuring a rate-limiting rule in AWS WAF, the company can restrict the number of requests coming from each IP address, preventing excessive traffic from overwhelming the website. This will help mitigate the impact of potential DDoS attacks and ensure that legitimate users can access the site without interruption.
upvoted 3 times
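A WAF rate-based rule of the kind described looks roughly like the following WAFv2 rule definition (the rule name and the 2,000-requests-per-5-minutes limit are illustrative):

```json
{
  "Name": "rate-limit-per-ip",
  "Priority": 0,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 2000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "rate-limit-per-ip"
  }
}
```

Because the limit is tracked per source IP over a rolling 5-minute window, only the offending addresses are blocked, which is what keeps the impact on legitimate users minimal even as the attackers rotate IPs.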
Efren 1 month, 1 week ago
If not AWS Shield, then WAF
upvoted 3 times
Efren 1 month, 1 week ago
My mind slipped to AWS Shield. GuardDuty can work alongside WAF for a DDoS attack, but ultimately it would be WAF
https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-suspicious-hosts/
upvoted 1 times
Efren 1 month, 1 week ago
D, Guard Duty for me
upvoted 1 times
Question #438 Topic 1
A company wants to share accounting data with an external auditor. The data is stored in an Amazon RDS DB instance that resides in a private subnet. The auditor has its own AWS account and requires its own copy of the database.
What is the MOST secure way for the company to share the database with the auditor?
A. Create a read replica of the database. Configure IAM standard database authentication to grant the auditor access.
B. Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new IAM user for the auditor. Grant the user access to the S3 bucket.
C. Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user's keys with the auditor to grant access to the object in the S3 bucket.
D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS Key Management Service (AWS KMS) encryption key.
Community vote distribution
D (100%)
alexandercamachop 3 weeks, 3 days ago
The most secure way for the company to share the database with the auditor is option D: Create an encrypted snapshot of the database, share the snapshot with the auditor, and allow access to the AWS Key Management Service (AWS KMS) encryption key.
By creating an encrypted snapshot, the company ensures that the database data is protected at rest. Sharing the encrypted snapshot with the auditor allows them to have their own copy of the database securely.
In addition, granting access to the AWS KMS encryption key ensures that the auditor has the necessary permissions to decrypt and access the encrypted snapshot. This allows the auditor to restore the snapshot and access the data securely.
This approach provides both data protection and access control, ensuring that the database is securely shared with the auditor while maintaining the confidentiality and integrity of the data.
upvoted 3 times
TariqKipkemei 2 weeks ago
best explanation ever
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Option D (Creating an encrypted snapshot of the database, sharing the snapshot, and allowing access to the AWS Key Management Service encryption key) is generally considered a better option for sharing the database with the auditor in terms of security and control.
upvoted 1 times
Question #439 Topic 1
A solutions architect configured a VPC that has a small range of IP addresses. The number of Amazon EC2 instances that are in the VPC is increasing, and there is an insufficient number of IP addresses for future workloads.
Which solution resolves this issue with the LEAST operational overhead?
A. Add an additional IPv4 CIDR block to increase the number of IP addresses, and create additional subnets in the VPC. Create new resources in the new subnets by using the new CIDR.
B. Create a second VPC with additional subnets. Use a peering connection to connect the second VPC with the first VPC. Update the routes and create new resources in the subnets of the second VPC.
C. Use AWS Transit Gateway to add a transit gateway and connect a second VPC with the first VPC. Update the routes of the transit gateway and VPCs. Create new resources in the subnets of the second VPC.
D. Create a second VPC. Create a Site-to-Site VPN connection between the first VPC and the second VPC by using a VPN-hosted solution on Amazon EC2 and a virtual private gateway. Update the route between VPCs to route the traffic through the VPN. Create new resources in the subnets of the second VPC.
Community vote distribution
A (100%)
antropaws 3 weeks, 2 days ago
A is correct: You assign a single CIDR IP address range as the primary CIDR block when you create a VPC and can add up to four secondary CIDR blocks after creation of the VPC.
upvoted 2 times
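The capacity gain from a secondary CIDR is easy to quantify with Python's `ipaddress` module (AWS reserves 5 addresses in every subnet: the network address, the VPC router, the DNS server, one for future use, and the broadcast address):

```python
import ipaddress

# Usable addresses in a subnet CIDR, after AWS's 5 reserved addresses.
def usable_hosts(cidr: str) -> int:
    return ipaddress.ip_network(cidr).num_addresses - 5

print(usable_hosts("10.0.0.0/24"))  # 251
print(usable_hosts("10.1.0.0/16"))  # 65531
```

So associating even one additional /16 secondary CIDR block (the example ranges above are illustrative) multiplies the available address pool by orders of magnitude, with no peering, transit gateway, or VPN to operate.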
Yadav_Sanjay 1 month, 1 week ago
Add additional CIDR of bigger range
upvoted 2 times
Efren 1 month, 1 week ago
Add new bigger subnets
upvoted 2 times
nosense 1 month, 1 week ago
A valid
upvoted 1 times
Question #440 Topic 1
A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination.
The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen a MySQL-compatible edition of Amazon Aurora to host the DB instance.
Which solutions will create the new DB instance? (Choose two.)
A. Import the RDS snapshot directly into Aurora.
B. Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora.
C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora.
Community vote distribution
AC (77%) BC (15%) 8%
Axaus Highly Voted 1 month, 1 week ago
A,C
A because the snapshot is already stored in AWS.
C because you dont need a migration tool going from MySQL to MySQL. You would use the MySQL utility.
upvoted 5 times
marufxplorer Most Recent 1 week, 3 days ago
CE
Since the backup created by the solutions architect was a database dump using the mysqldump utility, it cannot be directly imported into Aurora using RDS snapshots. Amazon Aurora has its own specific backup format that is different from RDS snapshots
upvoted 1 times
antropaws 3 weeks, 2 days ago
Migrating data from MySQL by using an Amazon S3 bucket
You can copy the full and incremental backup files from your source MySQL version 5.7 database to an Amazon S3 bucket, and then restore an Amazon Aurora MySQL DB cluster from those files.
This option can be considerably faster than migrating data using mysqldump, because using mysqldump replays all of the commands to recreate the schema and data from your source database in your new Aurora MySQL DB cluster.
By copying your source MySQL data files, Aurora MySQL can immediately use those files as the data for an Aurora MySQL DB cluster. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.ExtMySQL.html
upvoted 1 times
omoakin 1 month, 1 week ago
BE
Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora.
Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora
upvoted 1 times
Efren 1 month, 1 week ago
I'd say B and C.
You can create a dump of your data using the mysqldump utility, and then import that data into an existing Amazon Aurora MySQL DB cluster.
C - Because Amazon Aurora MySQL is a MySQL-compatible database, you can use the mysqldump utility to copy data from your MySQL or MariaDB database to an existing Amazon Aurora MySQL DB cluster.
B - You can copy the source files from your source MySQL version 5.5, 5.6, or 5.7 database to an Amazon S3 bucket, and then restore an Amazon Aurora MySQL DB cluster from those files.
upvoted 2 times
nosense 1 month, 2 weeks ago
RDS requires upload to S3
upvoted 1 times
nosense 1 month, 1 week ago
In the end, apparently A and C, because each creates a new DB. There is no sense in loading the snapshot into S3 when it can be imported directly; D and E create the new instance via migration.
upvoted 1 times
nosense 1 month, 2 weeks ago
To be honest, I can't decide between BE and BC...
upvoted 1 times
Question #441 Topic 1
A company hosts a multi-tier web application on Amazon Linux Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company observes that the Auto Scaling group launches more On-Demand Instances when the application's end users access high volumes of static web content. The company wants to optimize cost.
What should a solutions architect do to redesign the application MOST cost-effectively?
A. Update the Auto Scaling group to use Reserved Instances instead of On-Demand Instances.
B. Update the Auto Scaling group to scale by launching Spot Instances instead of On-Demand Instances.
C. Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3 bucket.
D. Create an AWS Lambda function behind an Amazon API Gateway API to host the static website contents.
Community vote distribution
C (100%)
TariqKipkemei 2 weeks ago
static web content = Amazon CloudFront
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
Static Web Content = S3 Always.
CloudFront = Closer to the users locations since it will cache in the Edge nodes.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
By leveraging Amazon CloudFront, you can cache and serve the static web content from edge locations worldwide, reducing the load on your EC2 instances. This can help lower the number of On-Demand Instances required to handle high volumes of static web content requests. Storing the static content in an Amazon S3 bucket and using CloudFront as a content delivery network (CDN) improves performance and reduces costs by reducing the load on your EC2 instances.
upvoted 2 times
Efren 1 month, 1 week ago
Static content, cloudFront plus S3
upvoted 2 times
Question #442 Topic 1
A company stores several petabytes of data across multiple AWS accounts. The company uses AWS Lake Formation to manage its data lake. The company's data science team wants to securely share selective data from its accounts with the company's engineering team for analytical purposes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Copy the required data to a common account. Create an IAM access role in that account. Grant access by specifying a permission policy that includes users from the engineering team accounts as trusted entities.
B. Use the Lake Formation permissions Grant command in each account where the data is stored to allow the required engineering team users to access the data.
C. Use AWS Data Exchange to privately publish the required data to the required engineering team accounts.
D. Use Lake Formation tag-based access control to authorize and grant cross-account permissions for the required data to the engineering team accounts.
Community vote distribution
D (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By utilizing Lake Formation's tag-based access control, you can define tags and tag-based policies to grant selective access to the required data for the engineering team accounts. This approach allows you to control access at a granular level without the need to copy or move the data to a common account or manage permissions individually in each account. It provides a centralized and scalable solution for securely sharing data across accounts with minimal operational overhead.
upvoted 7 times
luisgu Most Recent 1 month, 1 week ago
https://aws.amazon.com/blogs/big-data/securely-share-your-data-across-aws-accounts-using-aws-lake-formation/
upvoted 2 times
Question #443 Topic 1
A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with CacheControl headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
Community vote distribution
A (89%) 11%
Zuit 2 days, 7 hours ago
Pretty tricky question:
A seems right for the up and download: however, first sentence mentions: "hosting a web application on AWS" -> S3 is alright for static content, but for the web app we should prefer a compute service like EC2.
upvoted 1 times
TariqKipkemei 2 weeks ago
A fits this scenario
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
Amazon S3 (Simple Storage Service) is a highly scalable object storage service provided by AWS. It allows you to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you can host static websites, store and deliver large media files, and manage data for backup and restore.
Transfer Acceleration is a feature of Amazon S3 that utilizes the AWS global infrastructure to accelerate file transfers to and from Amazon S3. It uses optimized network paths and parallelization techniques to speed up data transfer, especially for large files and over long distances.
By using Amazon S3 with Transfer Acceleration, the web application can benefit from faster upload and download speeds, reducing latency and improving overall performance for users in different geographic regions. This solution is cost-effective as it leverages the existing Amazon S3 infrastructure and eliminates the need for additional compute resources.
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
How on earth is it C?
Transfer Acceleration is for optimizing file transfers to and from Amazon S3, whereas Amazon CloudFront is bringing content closer to the end user.
I feel good knowing that most of the people here are new to AWS.
upvoted 1 times
omoakin 1 month ago
CCCCCCCCCCC
Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application
upvoted 2 times
EA100 1 month, 1 week ago
Answer - C
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
Using Amazon EC2 with Auto Scaling allows for scalability and the ability to handle varying levels of demand for the web application. Auto Scaling ensures that the appropriate number of EC2 instances are provisioned based on the workload, enabling efficient resource utilization and cost optimization.
Amazon CloudFront can be used as a content delivery network (CDN) to cache and deliver static and dynamic content closer to the end users, reducing latency and improving performance. By leveraging CloudFront, the web application can benefit from faster content delivery to users in
different geographic regions.
So, option C is the correct choice in this situation to minimize latency, maximize performance, and achieve cost-effectiveness.
upvoted 2 times
hiroohiroo 1 month, 1 week ago
https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/userguide/transfer-acceleration.html
upvoted 2 times
karbob 4 weeks, 1 day ago
Using Amazon S3 with Transfer Acceleration is not the best choice because Amazon S3 is primarily a storage service and may not be optimized for hosting dynamic web applications.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Since S3 Transfer Acceleration is leveraging CloudFront's global network of edge location so C is not needed.
upvoted 2 times
karbob 4 weeks, 1 day ago
Transfer Acceleration is focused on optimizing file transfers to and from Amazon S3, whereas Auto Scaling with Amazon CloudFront is a more suitable combination for hosting a scalable web application with global accessibility.
upvoted 1 times
Efren 1 month, 1 week ago
S3 Transfer acceleration is precisely for this. agreed with nosense
upvoted 2 times
nosense 1 month, 1 week ago
i WILL Go with A.
upvoted 2 times
Question #444 Topic 1
A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB
instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.
An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the overall reliability of its environment.
What should the solutions architect do to maximize reliability of the application's infrastructure?
A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection.
B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.
Community vote distribution
B (100%)
TariqKipkemei 2 weeks ago
Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
It is the only one with high availability: Amazon RDS with Multi-AZ, and EC2 with an Auto Scaling group across multiple AZs.
upvoted 1 times
omoakin 1 month, 1 week ago
Same question from https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c02/ a long time ago, and still the same answer: option B.
upvoted 1 times
nosense 1 month, 1 week ago
B is correct. HA is ensured by the DB in Multi-AZ and EC2 in an Auto Scaling group.
upvoted 4 times
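The Multi-AZ and deletion-protection part of answer B is a single RDS API call. A hedged boto3-style sketch (the instance identifier is a placeholder; the real call is commented out since it needs AWS credentials):

```python
# Sketch: parameters for converting the DB instance to Multi-AZ and
# enabling deletion protection. "app-db" is a placeholder identifier.
modify_params = {
    "DBInstanceIdentifier": "app-db",
    "MultiAZ": True,
    "DeletionProtection": True,
    "ApplyImmediately": True,
}

# Real call: boto3.client("rds").modify_db_instance(**modify_params)
```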
Question #445 Topic 1
A company is storing 700 terabytes of data on a large network-attached storage (NAS) system in its corporate data center. The company has a hybrid environment with a 10 Gbps AWS Direct Connect connection.
After an audit from a regulator, the company has 90 days to move the data to the cloud. The company needs to move the data efficiently and without disruption. The company still needs to be able to access and update the data during the transfer window.
Which solution will meet these requirements?
A. Create an AWS DataSync agent in the corporate data center. Create a data transfer task. Start the transfer to an Amazon S3 bucket.
B. Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
C. Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket over the Direct Connect connection.
D. Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
Community vote distribution
A (100%)
TariqKipkemei 1 week, 6 days ago
AWS DataSync is a secure, online service that automates and accelerates moving data between on premises and AWS Storage services.
upvoted 1 times
wRhlH 2 weeks, 2 days ago
For those who wonder why not B: the Snowball Edge Storage Optimized device supports up to 100 TB per device for data transfer. https://docs.aws.amazon.com/snowball/latest/developer-guide/device-differences.html
upvoted 1 times
smartegnine 3 days, 16 hours ago
10 Gbps is about 1.25 GB/s, or roughly 108 TB per day, so the 700 TB would transfer in about 7 days over Direct Connect. A Snowball device would take at least 4 days just to be delivered to the data center.
upvoted 1 times
omoakin 1 month, 1 week ago
A
https://www.examtopics.com/discussions/amazon/view/46492-exam-aws-certified-solutions-architect-associate-saa-c02/#:~:text=Exam%20question%20from,Question%20%23%3A%20385
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
By leveraging AWS DataSync in combination with AWS Direct Connect, the company can efficiently and securely transfer its 700 terabytes of data to an Amazon S3 bucket without disruption. The solution allows continued access and updates to the data during the transfer window, ensuring business continuity throughout the migration process.
upvoted 2 times
nosense 1 month, 1 week ago
A for me, because Snowball Edge storage is limited to 100 TB per device.
upvoted 4 times
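For reference, answer A boils down to creating a DataSync task between two pre-created locations. A sketch of the request parameters (both location ARNs and the account ID are placeholders; the real call is commented out):

```python
# Sketch: a DataSync task from an on-premises location to S3. The location
# ARNs are placeholders created beforehand with create_location_nfs /
# create_location_s3.
task_params = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111111111111:location/loc-src",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111111111111:location/loc-dst",
    "Options": {
        "VerifyMode": "ONLY_FILES_TRANSFERRED",
        "TransferMode": "CHANGED",  # incremental: re-sync only changed files
    },
}

# Real call: boto3.client("datasync").create_task(**task_params)
```

The incremental transfer mode is what lets the company keep accessing and updating the data during the 90-day window.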
Question #446 Topic 1
A company stores data in PDF format in an Amazon S3 bucket. The company must follow a legal requirement to retain all new and existing data in Amazon S3 for 7 years.
Which solution will meet these requirements with the LEAST operational overhead?
A. Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data after 7 years. Configure multi-factor authentication (MFA) delete for all S3 objects.
B. Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
C. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
D. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance.
Community vote distribution
D (90%) 10%
MrAWSAssociate 1 week, 4 days ago
To bring existing objects in the S3 bucket into compliance, we use S3 Batch Operations, so option D is the most appropriate, especially with this much data in S3.
upvoted 1 times
TariqKipkemei 1 week, 6 days ago
For minimum operational overhead, D is best.
upvoted 1 times
DrWatson 3 weeks, 2 days ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-retention-date.html
upvoted 1 times
antropaws 3 weeks, 2 days ago
Batch operations will add operational overhead.
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
Use Object Lock in compliance mode, then use S3 Batch Operations.
WRONG: "Recopy all existing objects to bring the existing data into compliance" is manual work and not automated.
upvoted 1 times
omoakin 1 month, 1 week ago
C
When an object is locked in compliance mode, its retention mode can't be changed, and its retention period can't be shortened. Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.
upvoted 2 times
lucdt4 1 month ago
No, D for me, because the requirement is LEAST operational overhead, and recopying is a manual operation, so C is wrong.
D is correct.
upvoted 2 times
omoakin 1 month, 1 week ago
Error: I meant to type D. I wouldn't recopy.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Recopying vs. S3 Batch Operations: In Option C, the recommendation is to recopy all existing objects to ensure they have the appropriate retention settings. This can be done using simple S3 copy operations. On the other hand, Option D suggests using S3 Batch Operations, which is a more advanced feature and may require additional configuration and management. S3 Batch Operations can be beneficial if you have a massive number of objects and need to perform complex operations, but it might introduce more overhead for this specific use case.
Operational complexity: Option C has a straightforward process of recopying existing objects. It is a well-known operation in S3 and doesn't require additional setup or management. Option D introduces the need to set up and configure S3 Batch Operations, which can involve creating job definitions, specifying job parameters, and monitoring the progress of batch operations. This additional complexity may increase the operational overhead.
upvoted 1 times
Efren 1 month, 1 week ago
You need S3 Batch Operations to re-apply certain configuration to files that are already in S3, like encryption.
upvoted 4 times
nosense 1 month, 1 week ago
D for me, because it makes no sense to recopy all the data.
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
But D will introduce operational overhead.
upvoted 1 times
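For reference, the compliance-mode retention in answers C and D is one bucket-level setting; S3 Batch Operations then applies the retention to the existing objects. A sketch of the Object Lock request (bucket name is a placeholder, and Object Lock must already be enabled on the bucket):

```python
# Sketch: default Object Lock retention in compliance mode for 7 years.
# "example-bucket" is a placeholder; Object Lock must have been enabled
# when the bucket was created.
lock_config = {
    "Bucket": "example-bucket",
    "ObjectLockConfiguration": {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
}

# Real call: boto3.client("s3").put_object_lock_configuration(**lock_config)
```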
Question #447 Topic 1
A company has a stateless web application that runs on AWS Lambda functions that are invoked by Amazon API Gateway. The company wants to deploy the application across multiple AWS Regions to provide Regional failover capabilities.
What should a solutions architect do to route traffic to multiple Regions?
A. Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration.
B. Create an Amazon CloudFront distribution with an origin for each Region. Use CloudFront health checks to route traffic.
C. Create a transit gateway. Attach the transit gateway to the API Gateway endpoint in each Region. Configure the transit gateway to route requests.
D. Create an Application Load Balancer in the primary Region. Set the target group to point to the API Gateway endpoint hostnames in each Region.
Community vote distribution
A (70%) B (30%)
MrAWSAssociate 1 week, 4 days ago
Option A does make sense.
upvoted 1 times
Sangsation 1 week, 6 days ago
By creating an Amazon CloudFront distribution with origins in each AWS Region where the application is deployed, you can leverage CloudFront's global edge network to route traffic to the closest available Region. CloudFront will automatically route the traffic based on the client's location and the health of the origins using CloudFront health checks.
Option A (creating Amazon Route 53 health checks with an active-active failover configuration) is not suitable for this scenario as it is primarily used for failover between different endpoints within the same Region, rather than routing traffic to different Regions.
upvoted 1 times
TariqKipkemei 1 week, 6 days ago
Global, reduce latency, health checks, no failover = Amazon CloudFront.
Global, reduce latency, health checks, failover, route traffic = Amazon Route 53. Option A has more weight.
upvoted 1 times
Axeashes 2 weeks ago
https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/
upvoted 2 times
antropaws 3 weeks, 2 days ago
I understand that you can use Route 53 to provide regional failover.
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
To route traffic to multiple AWS Regions and provide regional failover capabilities for a stateless web application running on AWS Lambda functions invoked by Amazon API Gateway, you can use Amazon Route 53 with an active-active failover configuration.
By creating Amazon Route 53 health checks for each Region and configuring an active-active failover configuration, Route 53 can monitor the health of the endpoints in each Region and route traffic to healthy endpoints. In the event of a failure in one Region, Route 53 automatically routes traffic to the healthy endpoints in other Regions.
This setup ensures high availability and failover capabilities for your web application across multiple AWS Regions.
upvoted 1 times
udo2020 3 weeks, 3 days ago
I think it's A because the keyword is "route" traffic.
upvoted 2 times
omoakin 1 month ago
BBBBBBBBBBBBB
upvoted 1 times
karbob 4 weeks, 1 day ago
CloudFront does not support health checks for routing traffic. It is designed primarily for content distribution and caching, rather than for load balancing or traffic routing based on health checks.
upvoted 1 times
examtopictempacc 1 month, 1 week ago
A. I'm not an expert in this area, but I still want to express my opinion. After carefully reviewing the question and thinking about it for a long time, I actually don't know the reason. As I mentioned at the beginning, I'm not an expert in this field.
upvoted 3 times
Rob1L 1 month, 1 week ago
It's A
It's not B because Amazon CloudFront can distribute traffic to multiple origins, but it does not support automatic failover between regions based on health checks. CloudFront is primarily a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds.
upvoted 4 times
y0 1 month, 1 week ago
I agree with A - active-active failover means considering resources across all regions. So, in this case, to distribute traffic across all regions, Route 53 seems good. Cloudfront usage is more towards reducing latency for applications used globally by caching content at edge locations. It somehow does not fit the use case for distributing traffic. Also, not sure of the term "cloudfront healthchecks"
upvoted 1 times
omoakin 1 month, 1 week ago
A
check this out Qtn 3
https://dumpsgate.com/wp-content/uploads/2021/01/SAA-C02.pdf
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
This approach leverages the capabilities of CloudFront's intelligent routing and health checks to automatically distribute traffic across multiple AWS Regions and provide failover capabilities in case of Regional disruptions or unavailability.
upvoted 2 times
nosense 1 month, 1 week ago
B, because A can't provide regional failover.
upvoted 3 times
Efren 1 month, 1 week ago
Agreed
upvoted 1 times
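For reference, an active-active setup in Route 53 uses a non-failover routing policy (latency routing in this sketch) with a health check attached to each record, so an unhealthy Region is dropped automatically. All IDs, domain names, and endpoints below are placeholders:

```python
# Sketch: two latency-based records, each tied to a health check.
# Hosted zone ID, domain, endpoints, and health check IDs are placeholders.
record_change = {
    "HostedZoneId": "Z0000000000000000000",
    "ChangeBatch": {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": region,
                    "Region": region,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": endpoint}],
                    "HealthCheckId": hc_id,
                },
            }
            for region, endpoint, hc_id in [
                ("us-east-1", "abc.execute-api.us-east-1.amazonaws.com", "hc-east"),
                ("eu-west-1", "def.execute-api.eu-west-1.amazonaws.com", "hc-west"),
            ]
        ]
    },
}

# Real call: boto3.client("route53").change_resource_record_sets(**record_change)
```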
Question #448 Topic 1
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?
A. Add a set of VPNs between the Management and Production VPCs.
B. Add a second virtual private gateway and attach it to the Management VPC.
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC.
Community vote distribution
C (100%)
Abrar2022 3 weeks, 3 days ago
(production) VPN 1 > cgw 1
(management) VPN 2 > cgw 2
upvoted 2 times
Abrar2022 3 weeks, 3 days ago
ANSWER IS C
upvoted 1 times
omoakin 1 month, 1 week ago
I agree to C
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Option D is not a valid solution for mitigating single points of failure in this architecture.
To mitigate single points of failure in the architecture, you can consider implementing option C: adding a second set of VPNs to the Management VPC from a second customer gateway device. This will introduce redundancy at the VPN connection level for the Management VPC, ensuring that if one customer gateway or VPN connection fails, the other connection can still provide connectivity to the data center.
upvoted 2 times
Efren 1 month, 1 week ago
Redundant VPN connections: Instead of relying on a single device in the data center, the Management VPC should have redundant VPN connections established through multiple customer gateways. This will ensure high availability and fault tolerance in case one of the VPN connections or customer gateways fails.
upvoted 3 times
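For reference, answer C amounts to registering a second customer gateway device and creating a second VPN connection to the Management VPC's virtual private gateway. A sketch of the request parameters (the public IP, ASN, and gateway IDs are placeholders):

```python
# Sketch: adding a second customer gateway and VPN connection for redundancy.
second_cgw = {
    "Type": "ipsec.1",
    "PublicIp": "203.0.113.20",  # placeholder: second on-premises device
    "BgpAsn": 65000,
}
second_vpn = {
    "Type": "ipsec.1",
    "CustomerGatewayId": "cgw-2222",  # placeholder: ID returned for second_cgw
    "VpnGatewayId": "vgw-1111",       # placeholder: Management VPC's VGW
}

# Real calls:
#   ec2 = boto3.client("ec2")
#   ec2.create_customer_gateway(**second_cgw)
#   ec2.create_vpn_connection(**second_vpn)
```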
Question #449 Topic 1
A company runs its application on an Oracle database. The company plans to quickly migrate to AWS because of limited resources for the database, backup administration, and data center maintenance. The application uses third-party database features that require privileged access.
Which solution will help the company migrate the database to AWS MOST cost-effectively?
A. Migrate the database to Amazon RDS for Oracle. Replace third-party features with cloud services.
B. Migrate the database to Amazon RDS Custom for Oracle. Customize the database settings to support third-party features.
C. Migrate the database to an Amazon EC2 Amazon Machine Image (AMI) for Oracle. Customize the database settings to support third-party features.
D. Migrate the database to Amazon RDS for PostgreSQL by rewriting the application code to remove dependency on Oracle APEX.
Community vote distribution
B (88%) 13%
TariqKipkemei 1 week, 6 days ago
Custom database features = Amazon RDS Custom for Oracle
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
RDS Custom, since it's related to a third-party vendor.
upvoted 2 times
omoakin 1 month ago
CCCCCCCCCCCCCCCCCCCCC
upvoted 1 times
aqmdla2002 1 month, 1 week ago
https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-rds-custom-oracle/
upvoted 1 times
karbob 4 weeks, 1 day ago
Amazon RDS Custom for Oracle, which is not an actual service. !!!!
upvoted 1 times
nosense 1 month, 1 week ago
Option C is also a valid solution, but it is not as cost-effective as option B.
Option C requires the company to manage its own database infrastructure, which can be expensive and time-consuming. Additionally, the company will need to purchase and maintain Oracle licenses.
upvoted 1 times
y0 1 month, 1 week ago
RDS Custom enables access to the underlying database and OS, so you can configure additional settings to support third-party features. This capability is available only for Oracle and PostgreSQL.
upvoted 1 times
y0 1 month, 1 week ago
Sorry, Oracle and SQL Server (not PostgreSQL).
upvoted 1 times
omoakin 1 month, 1 week ago
I will say C because of this: "the application uses third-party database features."
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Shouldn't it be EC2, since the company would have full control over the database? Then again, the reason they are moving to AWS in the first place is "limited resources for the database, backup administration, and data center maintenance."
upvoted 1 times
Efren 1 month, 1 week ago
RDS Custom when something is related to a third-party vendor, for me.
upvoted 1 times
nosense 1 month, 1 week ago
Not sure, but B probably.
upvoted 2 times
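For reference, RDS Custom for Oracle is provisioned through the normal create_db_instance API with a custom engine. A hedged sketch (all identifiers, the custom engine version, and the instance profile name are placeholders; RDS Custom requires a custom engine version and an IAM instance profile):

```python
# Sketch: creating an RDS Custom for Oracle instance (engine "custom-oracle-ee").
# Every value below is a placeholder for illustration only.
create_params = {
    "DBInstanceIdentifier": "custom-oracle-db",
    "Engine": "custom-oracle-ee",
    "EngineVersion": "19.custom.v1",          # placeholder custom engine version
    "DBInstanceClass": "db.m5.xlarge",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,
    "CustomIamInstanceProfile": "AWSRDSCustomInstanceProfile",  # placeholder
}

# Real call: boto3.client("rds").create_db_instance(**create_params)
```

The privileged host access that RDS Custom grants is what lets the third-party database features keep working.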
Question #450 Topic 1
A company has a three-tier web application that is in a single server. The company wants to migrate the application to the AWS Cloud. The company also wants the application to align with the AWS Well-Architected Framework and to be consistent with AWS recommended best practices for security, scalability, and resiliency.
Which combination of solutions will meet these requirements? (Choose three.)
A. Create a VPC across two Availability Zones with the application's existing architecture. Host the application with existing architecture on an Amazon EC2 instance in a private subnet in each Availability Zone with EC2 Auto Scaling groups. Secure the EC2 instance with security groups and network access control lists (network ACLs).
B. Set up security groups and network access control lists (network ACLs) to control access to the database layer. Set up a single Amazon RDS database in a private subnet.
C. Create a VPC across two Availability Zones. Refactor the application to host the web tier, application tier, and database tier. Host each tier on its own private subnet with Auto Scaling groups for the web tier and application tier.
D. Use a single Amazon RDS database. Allow database access only from the application tier security group.
E. Use Elastic Load Balancers in front of the web tier. Control access by using security groups containing references to each layer's security groups.
F. Use an Amazon RDS database Multi-AZ cluster deployment in private subnets. Allow database access only from application tier security groups.
Community vote distribution
CEF (100%)
marufxplorer 1 week, 3 days ago
I also agree with CEF, but ChatGPT's answer is ACE. A and C are similar. Another argument was that F is not true because the question doesn't mention the DB.
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
C-scalable and resilient
E-high availability of the application
F-Multi-AZ configuration provides high availability
upvoted 3 times
omoakin 1 month ago
B- to control access to database
C-scalable and resilient
E-high availability of the application
upvoted 1 times
lucdt4 1 month ago
CEF
A: application's existing architecture is wrong (single AZ)
B: single AZ
D: Single AZ
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
C.
This solution follows the recommended architecture pattern of separating the web, application, and database tiers into different subnets. It provides better security, scalability, and fault tolerance.
By using Elastic Load Balancers (ELBs), you can distribute traffic to multiple instances of the web tier, increasing scalability and availability. Controlling access through security groups allows for fine-grained control and ensures only authorized traffic reaches each layer.
Deploying an Amazon RDS database in a Multi-AZ configuration provides high availability and automatic failover. Placing the database in private subnets enhances security. Allowing database access only from the application tier security groups limits exposure and follows the principle of least privilege.
upvoted 1 times
nosense 1 month, 1 week ago
Only this valid for best practices and well architected
upvoted 4 times
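For reference, the security-group referencing mentioned in options E and F looks like this in practice: the database tier's security group allows ingress only from the application tier's security group, not from an IP range. The group IDs and the MySQL port below are placeholders:

```python
# Sketch: allowing database access only from the application tier's SG
# by referencing the SG instead of a CIDR block. IDs are placeholders.
ingress_params = {
    "GroupId": "sg-db000000",  # placeholder: database tier SG
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-app00000"}],  # app tier SG
        }
    ],
}

# Real call: boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
```

Referencing a security group rather than IP ranges keeps the rule valid as Auto Scaling adds and removes application-tier instances.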
Question #451 Topic 1
A company is migrating its applications and databases to the AWS Cloud. The company will use Amazon Elastic Container Service (Amazon ECS), AWS Direct Connect, and Amazon RDS.
Which activities will be managed by the company's operational team? (Choose three.)
A. Management of the Amazon RDS infrastructure layer, operating system, and platforms
B. Creation of an Amazon RDS DB instance and configuring the scheduled maintenance window
C. Configuration of additional software components on Amazon ECS for monitoring, patch management, log management, and host intrusion detection
D. Installation of patches for all minor and major database versions for Amazon RDS
E. Ensure the physical security of the Amazon RDS infrastructure in the data center
F. Encryption of the data that moves in transit through Direct Connect
Community vote distribution
BCF (86%) 14%
kapit 1 week, 1 day ago
BC & F (no automatic encryption with Direct Connect).
upvoted 1 times
TariqKipkemei 1 week, 6 days ago
Amazon ECS is a fully managed service; the ops team only focuses on building their applications, not the environment. Only options B and F make sense.
upvoted 1 times
antropaws 3 weeks, 2 days ago
100% BCF.
upvoted 1 times
lucdt4 1 month ago
BCF
B: mentioned RDS
C: mentioned ECS
F: mentioned Direct Connect
upvoted 2 times
hiroohiroo 1 month, 1 week ago
Yes BCF
upvoted 1 times
omoakin 1 month, 1 week ago
I agree BCF
upvoted 1 times
Question #452 Topic 1
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.
Which solution will meet these requirements?
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
Community vote distribution
B (100%)
TariqKipkemei 1 week, 6 days ago
10 seconds to run, optimize the costs, consumes 1 GB of memory = AWS Lambda function.
upvoted 1 times
alexandercamachop 3 weeks, 3 days ago
AWS Lambda automatically scales resources to handle the workload, so you don't have to worry about managing the underlying infrastructure. It provisions the necessary compute resources based on the configured memory size (1 GB in this case) and executes the job in a serverless environment.
By using Amazon EventBridge, you can create a scheduled rule to trigger the Lambda function every hour, ensuring that the job runs on the desired interval.
upvoted 1 times
Yadav_Sanjay 1 month, 1 week ago
B. Within 10 seconds and 1 GB of memory (Lambda memory ranges from 128 MB to 10 GB).
upvoted 2 times
Yadav_Sanjay 1 month, 1 week ago
https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html
upvoted 1 times
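For reference, answer B wires an hourly EventBridge rule to the Lambda function. A sketch of the two request bodies (the rule name and function ARN are placeholders):

```python
# Sketch: an hourly EventBridge rule targeting the Lambda function.
# Rule name and function ARN are placeholders.
rule_params = {"Name": "hourly-job", "ScheduleExpression": "rate(1 hour)"}
target_params = {
    "Rule": "hourly-job",
    "Targets": [{
        "Id": "job-fn",
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:hourly-job",
    }],
}

# Real calls:
#   events = boto3.client("events")
#   events.put_rule(**rule_params)
#   events.put_targets(**target_params)
# (plus a lambda add_permission so EventBridge is allowed to invoke the function)
```

With Lambda the company pays only for roughly 10 seconds of compute per hour instead of an always-on instance.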
Question #453 Topic 1
A company wants to implement a backup strategy for Amazon EC2 data and multiple Amazon S3 buckets. Because of regulatory requirements, the company must retain backup files for a specific time period. The company must not alter the files for the duration of the retention period.
Which solution will meet these requirements?
A. Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create the required backup plan.
B. Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
C. Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle management.
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan.
Community vote distribution
D (100%)
Efren Highly Voted 1 month, 1 week ago
D. Governance is like the government: they can do things you cannot, like delete files or backups. :D In compliance mode, nobody can!
upvoted 7 times
joshnort 3 days, 11 hours ago
Great analogy
upvoted 1 times
TariqKipkemei Most Recent 1 week, 5 days ago
Must not alter the files for the duration of the retention period = Compliance Mode
upvoted 1 times
dydzah 1 month ago
https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html
upvoted 1 times
nosense 1 month, 1 week ago
D, because in governance mode we can still delete backups.
upvoted 3 times
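For reference, a vault lock is one AWS Backup API call; once ChangeableForDays elapses, the lock becomes immutable, which is the compliance-mode behavior the question asks for. The vault name and retention values below are placeholders:

```python
# Sketch: locking a backup vault so recovery points cannot be deleted or
# altered during the retention window. Values are placeholders.
vault_lock_params = {
    "BackupVaultName": "regulated-vault",
    "MinRetentionDays": 2555,  # roughly 7 years, as an example retention
    "ChangeableForDays": 3,    # grace period before the lock becomes permanent
}

# Real call:
#   boto3.client("backup").put_backup_vault_lock_configuration(**vault_lock_params)
```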
Question #454 Topic 1
A company has resources across multiple AWS Regions and accounts. A newly hired solutions architect discovers a previous employee did not provide details about the resources inventory. The solutions architect needs to build and map the relationship details of the various workloads across all accounts.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Systems Manager Inventory to generate a map view from the detailed view report.
B. Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads manually.
C. Use Workload Discovery on AWS to generate architecture diagrams of the workloads.
D. Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships.
Community vote distribution
C (89%) 11%
MrAWSAssociate 1 week, 4 days ago
Option A: AWS SSM offers "Software inventory": collect the software catalog and configuration for your instances.
Option C: Workload Discovery on AWS is a tool for maintaining an inventory of the AWS resources across your accounts and various Regions, mapping relationships between them, and displaying them in a web UI.
upvoted 1 times
DrWatson 3 weeks, 2 days ago
https://aws.amazon.com/blogs/mt/visualizing-resources-with-workload-discovery-on-aws/
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
AWS Workload Discovery: create diagrams, map, and visualize AWS resources across AWS accounts and Regions.
upvoted 2 times
Abrar2022 3 weeks, 3 days ago
Workload Discovery on AWS can map AWS resources across AWS accounts and Regions and visualize them in a UI provided on the website.
upvoted 1 times
hiroohiroo 1 month, 1 week ago
https://aws.amazon.com/jp/builders-flash/202209/workload-discovery-on-aws/?awsf.filter-name=*all
upvoted 2 times
omoakin 1 month, 1 week ago
Only C makes sense
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
Workload Discovery on AWS is a service that helps visualize and understand the architecture of your workloads across multiple AWS accounts and Regions. It automatically discovers and maps the relationships between resources, providing an accurate representation of the architecture.
upvoted 2 times
Efren 1 month, 1 week ago
Not sure here, to be honest.
To efficiently build and map the relationship details of various workloads across multiple AWS Regions and accounts, you could use the AWS Systems Manager Inventory feature in combination with AWS Resource Groups.
upvoted 1 times
nosense 1 month, 1 week ago
Only C maps relationships.
upvoted 1 times
Question #455 Topic 1
A company uses AWS Organizations. The company wants to operate some of its AWS accounts with different budgets. The company wants to
receive alerts and automatically prevent provisioning of additional resources on AWS accounts when the allocated budget threshold is met during a specific period.
Which combination of solutions will meet these requirements? (Choose three.)
A. Use AWS Budgets to create a budget. Set the budget amount under the Cost and Usage Reports section of the required AWS accounts.
B. Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the required AWS accounts.
C. Create an IAM user for AWS Budgets to run budget actions with the required permissions.
D. Create an IAM role for AWS Budgets to run budget actions with the required permissions.
E. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity created with the appropriate config rule to prevent provisioning of additional resources.
F. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity created with the appropriate service control policy (SCP) to prevent provisioning of additional resources.
Community vote distribution
BDF (71%) ADF (29%)
vesen22 Highly Voted 1 month ago
I don't see why ADF got so many votes when almost everyone has chosen BDF, smh.
https://acloudguru.com/videos/acg-fundamentals/how-to-set-up-an-aws-billing-and-budget-alert?utm_source=google&utm_medium=paid-search&utm_campaign=cloud-transformation&utm_term=ssi-global-acg-core-dsa&utm_content=free-trial&gclid=Cj0KCQjwmtGjBhDhARIsAEqfDEcDfXdLul2NxgSMxKracIITZimWOtDBRpsJPpx8lS9T4NndKhbUqPIaAlzhEALw_wcB
upvoted 5 times
Abrar2022 Most Recent 3 weeks, 3 days ago
How to create a budget:
Billing console > budget > create budget!
upvoted 1 times
udo2020 1 month, 1 week ago
It is BDF because there is actually a Billing Dashboard available.
upvoted 4 times
hiroohiroo 1 month, 1 week ago
https://docs.aws.amazon.com/ja_jp/awsaccountbilling/latest/aboutv2/view-billing-dashboard.html
upvoted 4 times
y0 1 month, 1 week ago
BDF - Budgets can be set from the billing dashboard in AWS console
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
Currently, AWS does not have a specific feature called "AWS Billing Dashboards."
upvoted 4 times
RainWhisper 1 month ago
https://awslabs.github.io/scale-out-computing-on-aws/workshops/TKO-Scale-Out-Computing/modules/071-budgets/
upvoted 1 times
Efren 1 month, 1 week ago
If I'm not wrong, those are correct.
upvoted 2 times
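For reference, answer F corresponds to a budget action of type APPLY_SCP_POLICY that fires automatically at the threshold, using the IAM role from answer D as its execution role. A sketch of the request parameters (the account ID, budget name, policy ID, OU, role ARN, and email are placeholders):

```python
# Sketch: a budget action that applies an SCP when actual spend reaches
# 100% of the budget. All identifiers are placeholders.
action_params = {
    "AccountId": "111111111111",
    "BudgetName": "team-a-monthly",
    "NotificationType": "ACTUAL",
    "ActionType": "APPLY_SCP_POLICY",
    "ActionThreshold": {"ActionThresholdValue": 100.0,
                        "ActionThresholdType": "PERCENTAGE"},
    "Definition": {"ScpActionDefinition": {"PolicyId": "p-examplepolicy",
                                           "TargetIds": ["ou-example-1"]}},
    "ExecutionRoleArn": "arn:aws:iam::111111111111:role/BudgetActionRole",
    "ApprovalModel": "AUTOMATIC",  # run the action without manual approval
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}],
}

# Real call: boto3.client("budgets").create_budget_action(**action_params)
```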
Question #456 Topic 1
A company runs applications on Amazon EC2 instances in one AWS Region. The company wants to back up the EC2 instances to a second Region. The company also wants to provision EC2 resources in the second Region and manage the EC2 instances centrally from one AWS account.
Which solution will meet these requirements MOST cost-effectively?
A. Create a disaster recovery (DR) plan that has a similar number of EC2 instances in the second Region. Configure data replication.
B. Create point-in-time Amazon Elastic Block Store (Amazon EBS) snapshots of the EC2 instances. Copy the snapshots to the second Region periodically.
C. Create a backup plan by using AWS Backup. Configure cross-Region backup to the second Region for the EC2 instances.
D. Deploy a similar number of EC2 instances in the second Region. Use AWS DataSync to transfer the data from the source Region to the second Region.
Community vote distribution
C (100%)
omoakin 4 weeks, 1 day ago
CCCCC
C. Create a backup plan by using AWS Backup. Configure cross-Region backup to the second Region for the EC2 instances.
upvoted 1 times
Blingy 1 month ago
CCCCCCC
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Using AWS Backup, you can create backup plans that automate the backup process for your EC2 instances. By configuring cross-Region backup, you can ensure that backups are replicated to the second Region, providing a disaster recovery capability. This solution is cost-effective as it leverages AWS Backup's built-in features and eliminates the need for manual snapshot management or deploying and managing additional EC2 instances in the second Region.
upvoted 4 times
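The backup plan described above can be sketched as the structure AWS Backup's `create_backup_plan` call takes; the `CopyActions` block is what drives the cross-Region copy. Vault names, ARNs, schedule, and retention here are placeholder assumptions:

```python
# Sketch of an AWS Backup plan with a cross-Region copy action (option C).
# The destination vault ARN, schedule, and retention are placeholders.
def build_backup_plan(vault_arn_second_region):
    return {
        "BackupPlanName": "ec2-cross-region",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",
            "Lifecycle": {"DeleteAfterDays": 35},
            # CopyActions replicates each recovery point to the vault
            # in the second Region.
            "CopyActions": [{
                "DestinationBackupVaultArn": vault_arn_second_region,
                "Lifecycle": {"DeleteAfterDays": 35},
            }],
        }],
    }

plan = build_backup_plan(
    "arn:aws:backup:us-west-2:111122223333:backup-vault:Default")
# Would be sent with: boto3.client("backup").create_backup_plan(BackupPlan=plan)
```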
Efren 1 month, 1 week ago
C, i would say same, always AWS Backup
upvoted 1 times
Question #457 Topic 1
A company that uses AWS is building an application to transfer data to a product manufacturer. The company has its own identity provider (IdP). The company wants the IdP to authenticate application users while the users use the application to transfer data. The company must use
Applicability Statement 2 (AS2) protocol. Which solution will meet these requirements?
A. Use AWS DataSync to transfer the data. Create an AWS Lambda function for IdP authentication.
B. Use Amazon AppFlow flows to transfer the data. Create an Amazon Elastic Container Service (Amazon ECS) task for IdP authentication.
C. Use AWS Transfer Family to transfer the data. Create an AWS Lambda function for IdP authentication.
D. Use AWS Storage Gateway to transfer the data. Create an Amazon Cognito identity pool for IdP authentication.
Community vote distribution
C (67%) D (33%)
TariqKipkemei 1 week, 5 days ago
Option C stands out stronger because AWS Transfer Family securely scales your recurring business-to-business file transfers to AWS Storage services using SFTP, FTPS, FTP, and AS2 protocols.
And AWS Lambda can be used to authenticate users with the company's IdP.
upvoted 1 times
dydzah 1 month ago
https://docs.aws.amazon.com/transfer/latest/userguide/custom-identity-provider-users.html
upvoted 1 times
examtopictempacc 1 month ago
C is correct. AWS Transfer Family supports the AS2 protocol, which is required by the company. Also, AWS Lambda can be used to authenticate users with the company's IdP, which meets the company's requirement.
upvoted 1 times
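The Lambda side of option C can be sketched as a custom identity provider function for AWS Transfer Family: the function receives the login attempt, checks it against the company's IdP, and returns session details on success or an empty response to deny. The user store, role ARN, and bucket path below are stand-in assumptions; a real function would call the company's IdP:

```python
# Sketch of a Lambda-backed custom identity provider for AWS Transfer
# Family (option C). FAKE_IDP stands in for the company's real IdP.
FAKE_IDP = {"alice": "s3cret"}  # placeholder user store

def lambda_handler(event, context):
    user = event.get("username", "")
    password = event.get("password", "")
    if FAKE_IDP.get(user) != password:
        return {}  # empty response = authentication denied
    # On success, return the IAM role the session should assume and the
    # user's landing directory. ARN and path are placeholders.
    return {
        "Role": "arn:aws:iam::111122223333:role/transfer-user-role",
        "HomeDirectory": f"/my-bucket/{user}",
    }

ok = lambda_handler({"username": "alice", "password": "s3cret"}, None)
denied = lambda_handler({"username": "alice", "password": "wrong"}, None)
```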
EA100 1 month, 1 week ago
Answer - D
AS2 is a widely used protocol for secure and reliable data transfer. In this scenario, the company wants to transfer data using the AS2 protocol and authenticate application users using their own identity provider (IdP). AWS Storage Gateway provides a hybrid cloud storage solution that enables data transfer between on-premises environments and AWS.
By using AWS Storage Gateway, you can set up a gateway that supports the AS2 protocol for data transfer. Additionally, you can configure authentication using an Amazon Cognito identity pool. Amazon Cognito provides a comprehensive authentication and user management service that integrates with various identity providers, including your own IdP.
Therefore, Option D is the correct solution as it leverages AWS Storage Gateway for AS2 data transfer and allows authentication using an Amazon Cognito identity pool integrated with the company's IdP.
upvoted 1 times
hiroohiroo 1 month, 1 week ago
https://repost.aws/articles/ARo2ihKKThT2Cue5j6yVUgsQ/articles/ARo2ihKKThT2Cue5j6yVUgsQ/aws-transfer-family-announces-support-for-sending-as2-messages-over-https?
upvoted 1 times
omoakin 1 month, 1 week ago
C is correct
upvoted 1 times
nosense 1 month, 1 week ago
Option D looks the better option because it is more secure, scalable, cost-effective, and easy to use than option C.
upvoted 1 times
omoakin 1 month, 1 week ago
This is a new question, and AS2 is newly supported by AWS Transfer Family. Good timing to know your stuff.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
AWS Storage Gateway supports the AS2 protocol for transferring data. By using AWS Storage Gateway, the company can integrate its own IdP authentication by creating an Amazon Cognito identity pool. Amazon Cognito provides user authentication and authorization capabilities, allowing the company to authenticate application users using its own IdP.
AWS Transfer Family does not currently support the AS2 protocol. AS2 is a specific protocol used for secure and reliable data transfer, often used in business-to-business (B2B) scenarios. In this case, option C, which suggests using AWS Transfer Family, would not meet the requirement of using the AS2 protocol.
upvoted 2 times
omoakin 1 month, 1 week ago
AWS Transfer Family now supports the Applicability Statement 2 (AS2) protocol, complementing existing protocol support for SFTP, FTPS, and FTP
upvoted 1 times
y0 1 month, 1 week ago
This is not a case for storage gateway which is more used for a hybrid like environment. Here, to transfer data, we can think or Datasync or Transfer family and considering AS2 protocol, transfer family looks good
upvoted 2 times
Efren 1 month, 1 week ago
ChatGPT:
To meet the requirements of using an identity provider (IdP) for user authentication and the AS2 protocol for data transfer, you can implement the following solution:
AWS Transfer Family: Use AWS Transfer Family, specifically AWS Transfer for SFTP or FTPS, to handle the data transfer using the AS2 protocol. AWS Transfer for SFTP and FTPS provide fully managed, highly available SFTP and FTPS servers in the AWS Cloud.
Not sure about Lambda tho
upvoted 2 times
Efren 1 month, 1 week ago
Maybe yes
The Lambda authorizer authenticates the token with the third-party identity provider.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Also from ChatGPT
AWS Transfer Family supports multiple protocols, including AS2, and can be used for data transfer. By utilizing AWS Transfer Family, the company can integrate its own IdP authentication by creating an AWS Lambda function.
Both options D and C are valid solutions for the given requirements. The choice between them would depend on additional factors such as specific preferences, existing infrastructure, and overall architectural considerations.
upvoted 2 times
Question #458 Topic 1
A solutions architect is designing a REST API in Amazon API Gateway for a cash payback service. The application requires 1 GB of memory and 2 GB of storage for its computation resources. The application will require that the data is in a relational format.
Which additional combination of AWS services will meet these requirements with the LEAST administrative effort? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon RDS
D. Amazon DynamoDB
E. Amazon Elastic Kubernetes Service (Amazon EKS)
Community vote distribution
BC (77%) AC (23%)
cloudenthusiast Highly Voted 1 month, 1 week ago
"The application will require that the data is in a relational format" so DynamoDB is out. RDS is the choice. Lambda is severless.
upvoted 6 times
TariqKipkemei Most Recent 1 week, 5 days ago
AWS Lambda and Amazon RDS
upvoted 1 times
handsonlabsaws 3 weeks, 4 days ago
"2 GB of storage for its COMPUTATION resources" the maximum for Lambda is 512MB.
upvoted 3 times
r3mo 2 weeks, 2 days ago
At first I was thinking the same. But the computation memory for the Lambda function is 1 GB, not 2 GB. Hence, if you go to Basic settings when you create the Lambda function, you can select 1024 MB (1 GB) in the memory settings, and that solves the problem.
upvoted 1 times
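For reference on the limits debated above: Lambda memory is configurable from 128 MB up to 10,240 MB, and ephemeral `/tmp` storage from the 512 MB default up to 10,240 MB, so 1 GB of memory and 2 GB of storage both fit. A sketch of the relevant `create_function` settings; the function name, role, and code location are placeholders:

```python
# Sketch of Lambda settings that satisfy "1 GB of memory and 2 GB of
# storage" (option B). Names, role ARN, and code location are placeholders.
params = {
    "FunctionName": "cashback-api-backend",
    "Runtime": "python3.12",
    "Role": "arn:aws:iam::111122223333:role/lambda-exec",
    "Handler": "app.handler",
    "Code": {"S3Bucket": "my-artifacts", "S3Key": "app.zip"},
    "MemorySize": 1024,                  # 1 GB of memory
    "EphemeralStorage": {"Size": 2048},  # 2 GB of /tmp storage
}
# Would be sent with: boto3.client("lambda").create_function(**params)
```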
Efren 1 month, 1 week ago
Relational Data RDS and computing for Lambda
upvoted 3 times
nosense 1 month, 1 week ago
bc for me
upvoted 2 times
Question #459 Topic 1
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags.
An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the
organization and needs to access all reports from Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
Community vote distribution
A (100%)
TariqKipkemei 1 week, 5 days ago
From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
By activating a user-defined cost allocation tag named "department" and creating a cost report in Cost Explorer that groups by the tag name and filters by EC2, the accounting team will be able to track and attribute costs to specific departments across all AWS accounts within the organization. This approach allows for consistent cost allocation and reporting regardless of the AWS account structure.
upvoted 3 times
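The report described in option A maps to a Cost Explorer `get_cost_and_usage` query that groups by the `department` tag and filters to EC2. The dates below are placeholder assumptions:

```python
# Sketch of the Cost Explorer query behind option A: EC2 spend grouped
# by the user-defined cost allocation tag "department". Dates are
# placeholders.
query = {
    "TimePeriod": {"Start": "2023-05-01", "End": "2023-06-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "TAG", "Key": "department"}],
    "Filter": {
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
}
# Run from the management account with:
# boto3.client("ce").get_cost_and_usage(**query)
```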
Question #460 Topic 1
A company wants to securely exchange data between its software as a service (SaaS) application Salesforce account and Amazon S3. The company must encrypt the data at rest by using AWS Key Management Service (AWS KMS) customer managed keys (CMKs). The company must also encrypt the data in transit. The company has enabled API access for the Salesforce account.
Which solution will meet these requirements?
A. Create AWS Lambda functions to transfer the data securely from Salesforce to Amazon S3.
B. Create an AWS Step Functions workflow. Define the task to transfer the data securely from Salesforce to Amazon S3.
C. Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.
D. Create a custom connector for Salesforce to transfer the data securely from Salesforce to Amazon S3.
Community vote distribution
C (100%)
TariqKipkemei 1 week, 5 days ago
With Amazon AppFlow automate bi-directional data flows between SaaS applications and AWS services in just a few clicks
upvoted 1 times
DrWatson 3 weeks, 2 days ago
https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
All you need to know is that AWS AppFlow securely transfers data between different SaaS applications and AWS services
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Amazon AppFlow is a fully managed integration service that allows you to securely transfer data between different SaaS applications and AWS services. It provides built-in encryption options and supports encryption in transit using SSL/TLS protocols. With AppFlow, you can configure the data transfer flow from Salesforce to Amazon S3, ensuring data encryption at rest by utilizing AWS KMS CMKs.
upvoted 3 times
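Option C can be sketched as the flow definition AppFlow's `create_flow` call takes: a Salesforce source, an S3 destination, and a customer managed KMS key ARN for encryption at rest. The connector profile name, bucket, key ARN, and field names are placeholder assumptions:

```python
# Sketch of an AppFlow flow (option C): Salesforce -> S3 with a customer
# managed KMS key. Profile name, bucket, and key ARN are placeholders.
flow = {
    "flowName": "salesforce-to-s3",
    "kmsArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    "triggerConfig": {"triggerType": "OnDemand"},
    "sourceFlowConfig": {
        "connectorType": "Salesforce",
        "connectorProfileName": "my-salesforce-profile",
        "sourceConnectorProperties": {
            "Salesforce": {"object": "Account"},
        },
    },
    "destinationFlowConfigList": [{
        "connectorType": "S3",
        "destinationConnectorProperties": {
            "S3": {"bucketName": "my-raw-data-bucket"},
        },
    }],
    "tasks": [{
        "sourceFields": ["Id", "Name"],
        "taskType": "Map_all",
        "connectorOperator": {"Salesforce": "NO_OP"},
    }],
}
# Would be sent with: boto3.client("appflow").create_flow(**flow)
```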
Efren 1 month, 1 week ago
Saas with another service, AppFlow
upvoted 1 times
Question #461 Topic 1
A company is developing a mobile gaming app in a single AWS Region. The app runs on multiple Amazon EC2 instances in an Auto Scaling group.
The company stores the app data in Amazon DynamoDB. The app communicates by using TCP traffic and UDP traffic between the users and the servers. The application will be used globally. The company wants to ensure the lowest possible latency for all users.
Which solution will meet these requirements?
A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer (ALB) behind an accelerator endpoint that uses Global Accelerator integration and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB.
B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB) behind an accelerator endpoint that uses Global Accelerator integration and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB.
C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load Balancer (NLB) behind the endpoint and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the origin.
D. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application Load Balancer (ALB) behind the endpoint and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB. Update CloudFront to use the ALB as the origin.
Community vote distribution
B (100%)
TariqKipkemei 1 week, 1 day ago
TCP and UDP = global accelerator and Network Load Balancer
upvoted 1 times
hiroohiroo 1 month, 1 week ago
AWS Global Accelerator+NLB
upvoted 3 times
Efren 1 month, 1 week ago
UDP, Global Accelerator plus NLB
upvoted 1 times
nosense 1 month, 1 week ago
AWS Global Accelerator is a better solution for the mobile gaming app than CloudFront
upvoted 2 times
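The consensus option B boils down to two request shapes: an NLB listener on the combined `TCP_UDP` protocol, and a Global Accelerator endpoint group pointing at that NLB. All ARNs, the Region, and the port are placeholder assumptions:

```python
# Sketch of option B's wiring: NLB listening on TCP and UDP at once,
# fronted by a Global Accelerator endpoint group. ARNs/port are placeholders.
nlb_listener = {
    "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/abc123",
    "Protocol": "TCP_UDP",  # NLB listeners can carry both protocols on one port
    "Port": 3000,
    "DefaultActions": [{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/game/xyz789",
    }],
}
endpoint_group = {
    "EndpointGroupRegion": "us-east-1",
    "EndpointConfigurations": [
        {"EndpointId": nlb_listener["LoadBalancerArn"], "Weight": 128},
    ],
}
# Created with boto3.client("elbv2").create_listener(**nlb_listener) and
# boto3.client("globalaccelerator").create_endpoint_group(..., **endpoint_group)
```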
Question #462 Topic 1
A company has an application that processes customer orders. The company hosts the application on an Amazon EC2 instance that saves the orders to an Amazon Aurora database. Occasionally when traffic is high the workload does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as possible?
A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic.
B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SNS topic.
D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches CPU threshold limits. Use scheduled scaling of EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
Community vote distribution
B (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By decoupling the write operation from the processing operation using SQS, you ensure that the orders are reliably stored in the queue, regardless of the processing capacity of the EC2 instances. This allows the processing to be performed at a scalable rate based on the available EC2 instances, improving the overall reliability and speed of order processing.
upvoted 5 times
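The decoupling described above can be shown with an in-memory sketch of the pattern: the write path only enqueues, and workers drain the queue into the database at their own pace. Here `queue.Queue` stands in for Amazon SQS and a list stands in for the Aurora database:

```python
# In-memory sketch of option B's decoupling pattern. queue.Queue stands
# in for the SQS queue; the list stands in for Aurora.
import queue

order_queue = queue.Queue()  # "SQS"
database = []                # "Aurora"

def receive_order(order):
    # Fast, reliable write path: enqueue and return immediately.
    order_queue.put(order)

def worker_drain():
    # Auto Scaling group workers: read from the queue, write to the DB.
    while not order_queue.empty():
        database.append(order_queue.get())

for i in range(5):
    receive_order({"order_id": i})
worker_drain()
```

The point of the pattern is that `receive_order` never slows down under load; only `worker_drain` capacity (the Auto Scaling group) has to grow.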
TariqKipkemei Most Recent 1 week, 1 day ago
Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
upvoted 1 times
antropaws 3 weeks, 2 days ago
100% B.
upvoted 1 times
omoakin 4 weeks, 1 day ago
BBBBBBBBBB
upvoted 1 times
Question #463 Topic 1
An IoT company is releasing a mattress that has sensors to collect data about a user’s sleep. The sensors will send data to an Amazon S3 bucket.
The sensors collect approximately 2 MB of data every night for each mattress. The company must process and summarize the data for each mattress. The results need to be available as soon as possible. Data processing will require 1 GB of memory and will finish within 30 seconds.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Glue with a Scala job
B. Use Amazon EMR with an Apache Spark script
C. Use AWS Lambda with a Python script
D. Use AWS Glue with a PySpark job
Community vote distribution
C (100%)
antropaws 3 weeks, 2 days ago
I reckon C, but I would consider other well founded options.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
AWS Lambda charges you based on the number of invocations and the execution time of your function. Since the data processing job is relatively small (2 MB of data), Lambda is a cost-effective choice. You only pay for the actual usage without the need to provision and maintain infrastructure.
upvoted 4 times
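Note that 1 GB of memory is well within Lambda's configurable range (128 MB to 10,240 MB). The option C processing step can be sketched as a small Python handler; the event shape is an assumption, and a real handler would first read the night's object from S3:

```python
# Sketch of a Python Lambda handler (option C) summarizing one night of
# sensor samples for a mattress. The event shape is a placeholder
# assumption; a real handler would fetch the 2 MB object from S3.
def lambda_handler(event, context):
    samples = event["samples"]  # e.g. per-minute sensor readings
    return {
        "mattress_id": event["mattress_id"],
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "avg": round(sum(samples) / len(samples), 2),
    }

summary = lambda_handler(
    {"mattress_id": "m-001", "samples": [58, 61, 57, 64]}, None)
```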
joechen2023 1 week, 2 days ago
but the question states "Data processing will require 1 GB of memory and will finish within 30 seconds," so it can't be C, as Lambda supports a maximum of 512 MB
upvoted 1 times
nosense 1 month, 1 week ago
c anyway the MOST cost-effectively
upvoted 2 times
Question #464 Topic 1
A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code.
Which solution meets these requirements?
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the snapshot.
C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute requests across the databases.
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53 weighted record sets to distribute requests across instances.
Community vote distribution
A (100%)
TariqKipkemei 1 week ago
Eliminate single points of failure = Multi-AZ deployment
upvoted 1 times
antropaws 3 weeks, 2 days ago
A) https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html#Concepts.MultiAZ.Migrating
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
"minimize database downtime" so why create a new DB just modify the existing one so no time is wasted.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Compared to other solutions that involve creating new instances, restoring snapshots, or setting up replication manually, converting to a Multi-AZ deployment is a simpler and more streamlined approach with lower overhead.
Overall, option A offers a cost-effective and efficient way to minimize database downtime without requiring significant changes or additional complexities.
upvoted 2 times
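Option A is a single in-place modification. A sketch of the `modify_db_instance` parameters; the instance identifier is a placeholder:

```python
# Sketch of option A: convert the existing Single-AZ instance to
# Multi-AZ in place. Instance identifier is a placeholder.
params = {
    "DBInstanceIdentifier": "orders-db",
    "MultiAZ": True,
    # False = apply during the next maintenance window; set True to
    # start the conversion right away.
    "ApplyImmediately": False,
}
# Would be sent with: boto3.client("rds").modify_db_instance(**params)
```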
Efren 1 month, 1 week ago
A for HA, but also read replica can convert itself to master if the master is down... so not sure if C?
upvoted 1 times
Efren 1 month, 1 week ago
Sorry, the Route 53 doesnt make sense to sent requests to RR , what if is a write?
upvoted 1 times
Question #465 Topic 1
A company is developing an application to support customer demands. The company wants to deploy the application on multiple Amazon EC2 Nitro-based instances within the same Availability Zone. The company also wants to give the application the ability to write to multiple block
storage volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher application availability. Which solution will meet these requirements?
A. Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
B. Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
D. Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
Community vote distribution
C (78%) 11% 11%
TariqKipkemei 1 week ago
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 and io2) volumes.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html
upvoted 1 times
Axeashes 2 weeks ago
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 and io2) volumes. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html
upvoted 1 times
Uzi_m 2 weeks, 6 days ago
The correct answer is A.
Currently, Multi Attach EBS feature is supported by gp3 volumes also.
Multi-Attach is supported for certain EBS volume types, including io1, io2, gp3, st1, and sc1 volumes.
upvoted 1 times
AshishRocks 3 weeks, 2 days ago
Answer should be D
upvoted 1 times
AshishRocks 3 weeks, 2 days ago
By ChatGPT - Create General Purpose SSD (gp2) volumes: Provision multiple gp2 volumes with the required capacity for your application.
upvoted 1 times
AshishRocks 3 weeks, 2 days ago
Multi-Attach does not support Provisioned IOPS SSD (io2) volumes. Multi-Attach is currently available only for General Purpose SSD (gp2), Throughput Optimized HDD (st1), and Cold HDD (sc1) EBS volumes.
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 or io2) volumes.
upvoted 1 times
elmogy 1 month ago
only io1/io2 supports Multi-Attach
upvoted 2 times
Uzi_m 2 weeks, 6 days ago
Multi-Attach is supported for certain EBS volume types, including io1, io2, gp3, st1, and sc1 volumes.
upvoted 1 times
examtopictempacc 1 month ago
only io1/io2 supports Multi-Attach
upvoted 2 times
VIad 1 month, 1 week ago
Option D suggests using General Purpose SSD (gp2) EBS volumes with Amazon EBS Multi-Attach. While gp2 volumes support multi-attach, gp3 volumes offer a more cost-effective solution with enhanced performance characteristics.
upvoted 1 times
VIad 1 month, 1 week ago
I'm sorry :
Multi-Attach enabled volumes can be attached to up to 16 instances built on the Nitro System that are in the same Availability Zone. Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 or io2) volumes.
upvoted 2 times
VIad 1 month, 1 week ago
The answer is C:
upvoted 1 times
EA100 1 month, 1 week ago
Answer - C
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach.
While both option C and option D can support Amazon EBS Multi-Attach, using Provisioned IOPS SSD (io2) EBS volumes provides higher performance and lower latency compared to General Purpose SSD (gp2) volumes. This makes io2 volumes better suited for demanding and mission-critical applications where performance is crucial.
If the goal is to achieve higher application availability and ensure optimal performance, using Provisioned IOPS SSD (io2) EBS volumes with Multi-Attach will provide the best results.
upvoted 1 times
nosense 1 month, 1 week ago
c is right
Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the same Availability Zone.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html nothing about gp
upvoted 2 times
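Option C can be sketched as a `create_volume` request with Multi-Attach enabled; per the docs cited above, the resulting volume can be attached to up to 16 Nitro instances in the same Availability Zone. Size, IOPS, and AZ here are placeholder assumptions:

```python
# Sketch of option C: a Provisioned IOPS io2 volume with Multi-Attach
# enabled. Size, IOPS, and Availability Zone are placeholders.
params = {
    "AvailabilityZone": "us-east-1a",
    "VolumeType": "io2",        # Multi-Attach requires io1 or io2
    "Size": 100,                # GiB
    "Iops": 10000,
    "MultiAttachEnabled": True,
}
# Would be sent with: boto3.client("ec2").create_volume(**params)
```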
cloudenthusiast 1 month, 1 week ago
Given that the scenario does not mention any specific requirements for high-performance or specific IOPS needs, using General Purpose SSD (gp2) EBS volumes with Amazon EBS Multi-Attach (option D) is typically the more cost-effective and suitable choice. General Purpose SSD (gp2) volumes provide a good balance of performance and cost, making them well-suited for general-purpose workloads.
upvoted 1 times
elmogy 1 month ago
the question has not mentioned anything about cost-effective solution. only io1/io2 supports Multi-Attach
plus fyi, gp3 is the one gives a good balance of performance and cost. so gp2 is wrong in every way
upvoted 1 times
omoakin 1 month, 1 week ago
I agree
General Purpose SSD (gp2) volumes are the most common volume type. They were designed to be a cost-effective storage option for a wide variety of workloads. Gp2 volumes cover system volumes, dev and test environments, and various low-latency apps.
upvoted 1 times
y0 1 month, 1 week ago
gp2 - IOPS 16000
Nitro - IOPS 64000 - supported by io2. C is correct
upvoted 1 times
Question #466 Topic 1
A company designed a stateless two-tier application that uses Amazon EC2 in a single Availability Zone and an Amazon RDS Multi-AZ DB instance. New company management wants to ensure the application is highly available.
What should a solutions architect do to meet this requirement?
A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load Balancer
B. Configure the application to take snapshots of the EC2 instances and send them to a different AWS Region
C. Configure the application to use Amazon Route 53 latency-based routing to feed requests to the application
D. Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application Load Balancer
Community vote distribution
A (100%)
nosense Highly Voted 1 month, 1 week ago
it's A
upvoted 5 times
TariqKipkemei Most Recent 1 week ago
Highly available = Multi-AZ EC2 Auto Scaling and Application Load Balancer.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
By combining Multi-AZ EC2 Auto Scaling and an Application Load Balancer, you achieve high availability for the EC2 instances hosting your stateless two-tier application.
upvoted 4 times
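Option A's two pieces can be sketched as one Auto Scaling group request: subnets in two Availability Zones make the tier Multi-AZ, and the target group ARN registers instances with the ALB. All IDs, ARNs, and names are placeholder assumptions:

```python
# Sketch of option A: a Multi-AZ Auto Scaling group registered with an
# ALB target group. Subnet IDs, ARNs, and names are placeholders.
asg = {
    "AutoScalingGroupName": "web-asg",
    "MinSize": 2,
    "MaxSize": 6,
    # Subnets in two different Availability Zones make the tier Multi-AZ.
    "VPCZoneIdentifier": "subnet-aaa111,subnet-bbb222",
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",
    ],
    "LaunchTemplate": {"LaunchTemplateName": "web-lt", "Version": "$Latest"},
}
# Would be sent with:
# boto3.client("autoscaling").create_auto_scaling_group(**asg)
```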
Question #467 Topic 1
A company uses AWS Organizations. A member account has purchased a Compute Savings Plan. Because of changes in the workloads inside the member account, the account no longer receives the full benefit of the Compute Savings Plan commitment. The company uses less than 50% of
its purchased compute power.
Which solution will solve this issue?
A. Turn on discount sharing from the Billing Preferences section of the account console in the member account that purchased the Compute Savings Plan.
B. Turn on discount sharing from the Billing Preferences section of the account console in the company's Organizations management account.
C. Migrate additional compute workloads from another AWS account to the account that has the Compute Savings Plan.
D. Sell the excess Savings Plan commitment in the Reserved Instance Marketplace.
Community vote distribution
B (73%) D (27%)
norris81 Highly Voted 1 month, 1 week ago
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html
Sign in to the AWS Management Console and open the AWS Billing console at https://console.aws.amazon.com/billing/.
Note: Ensure you're logged in to the management account of your AWS Organizations.
upvoted 5 times
live_reply_developers Most Recent 5 days, 13 hours ago
"For example, you might want to sell Reserved Instances after moving instances to a new AWS Region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html
upvoted 1 times
TariqKipkemei 1 week ago
answer is B.
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html
upvoted 1 times
Felix_br 3 weeks, 3 days ago
The company uses less than 50% of its purchased compute power.
For this reason i believe D is the best solution : https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html
upvoted 2 times
Abrar2022 3 weeks, 3 days ago
The company Organization's management account can turn on/off shared reserved instances.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
To summarize, option C (Migrate additional compute workloads from another AWS account to the account that has the Compute Savings Plan) is a valid solution to address the underutilization of the Compute Savings Plan. However, it involves workload migration and may require careful planning and coordination. Consider the feasibility and impact of migrating workloads before implementing this solution.
upvoted 2 times
EA100 1 month, 1 week ago
Answer - C
If a member account within AWS Organizations has purchased a Compute Savings Plan
upvoted 1 times
EA100 1 month, 1 week ago
Answer - C
upvoted 1 times
Question #468 Topic 1
A company is developing a microservices application that will provide a search catalog for customers. The company must use REST APIs to
present the frontend of the application to users. The REST APIs must access the backend services that the company hosts in containers in private VPC subnets.
Which solution will meet these requirements?
A. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway to access Amazon ECS.
B. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway to access Amazon ECS.
C. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private subnet. Create a security group for API Gateway to access Amazon ECS.
D. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private subnet. Create a security group for API Gateway to access Amazon ECS.
Community vote distribution
B (100%)
Axeashes 1 week ago
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-private-integration.html
upvoted 1 times
TariqKipkemei 1 week ago
A VPC link is a resource in Amazon API Gateway that allows for connecting API routes to private resources inside a VPC.
upvoted 1 times
samehpalass 1 week, 1 day ago
B is the right choice
upvoted 1 times
Yadav_Sanjay 1 week, 3 days ago
Why Not D
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
REST API with Amazon API Gateway: REST APIs are the appropriate choice for providing the frontend of the microservices application. Amazon API Gateway allows you to design, deploy, and manage REST APIs at scale.
Amazon ECS in a Private Subnet: Hosting the application in Amazon ECS in a private subnet ensures that the containers are securely deployed within the VPC and not directly exposed to the public internet.
Private VPC Link: To enable the REST API in API Gateway to access the backend services hosted in Amazon ECS, you can create a private VPC link. This establishes a private network connection between the API Gateway and ECS containers, allowing secure communication without traversing the public internet.
upvoted 3 times
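Option B's two pieces can be sketched as request shapes for the REST API flavor of API Gateway: a VPC link targeting the NLB in front of the ECS service, then an integration that routes through it. All IDs, ARNs, and the URI are placeholder assumptions:

```python
# Sketch of option B: a private VPC link plus a REST API integration
# that uses it. IDs, ARNs, and URI are placeholders. Note that REST API
# VPC links target a Network Load Balancer in front of the ECS service.
vpc_link = {
    "name": "catalog-vpc-link",
    "targetArns": [
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/ecs-nlb/abc123",
    ],
}
integration = {
    "restApiId": "a1b2c3",
    "resourceId": "r1",
    "httpMethod": "GET",
    "type": "HTTP_PROXY",
    "integrationHttpMethod": "GET",
    "connectionType": "VPC_LINK",  # route through the private link
    "connectionId": "vlink-id",    # returned by create_vpc_link
    "uri": "http://ecs-nlb.internal/search",
}
# Would be sent with boto3.client("apigateway").create_vpc_link(**vpc_link)
# and then .put_integration(**integration).
```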
nosense 1 month, 1 week ago
b is right, bcs vpc link provided security connection
upvoted 2 times
Question #469 Topic 1
A company stores raw collected data in an Amazon S3 bucket. The data is used for several types of analytics on behalf of the company's
customers. The type of analytics requested determines the access pattern on the S3 objects.
The company cannot predict or control the access pattern. The company wants to reduce its S3 costs.
Which solution will meet these requirements?
A. Use S3 replication to transition infrequently accessed objects to S3 Standard-Infrequent Access (S3 Standard-IA)
B. Use S3 Lifecycle rules to transition objects from S3 Standard to Standard-Infrequent Access (S3 Standard-IA)
C. Use S3 Lifecycle rules to transition objects from S3 Standard to S3 Intelligent-Tiering
D. Use S3 Inventory to identify and transition objects that have not been accessed from S3 Standard to S3 Intelligent-Tiering
Community vote distribution
C (100%)
TariqKipkemei 1 week ago
Cannot predict access pattern = S3 Intelligent-Tiering.
upvoted 1 times
Efren 1 month, 1 week ago
Not known patterns, Intelligent Tier
upvoted 3 times
nosense 1 month, 1 week ago
S3 Inventory can't move files to another storage class
upvoted 3 times
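The Lifecycle rule from option C can be sketched as follows: a rule that transitions every object to S3 Intelligent-Tiering. The rule ID is illustrative; the bucket name in the commented call is a placeholder.

```python
# Sketch of an S3 Lifecycle rule (option C): transition all objects from
# S3 Standard to S3 Intelligent-Tiering.
lifecycle_config = {
    "Rules": [
        {
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {},  # empty filter = applies to every object
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }
    ]
}
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="raw-collected-data", LifecycleConfiguration=lifecycle_config)
```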
Question #470 Topic 1
A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The applications must initiate communications with other
external applications using the internet. However, the company's security policy states that any external service cannot initiate a connection to the EC2 instances.
What should a solutions architect recommend to resolve this issue?
A. Create a NAT gateway and make it the destination of the subnet's route table
B. Create an internet gateway and make it the destination of the subnet's route table
C. Create a virtual private gateway and make it the destination of the subnet's route table
D. Create an egress-only internet gateway and make it the destination of the subnet's route table
Community vote distribution
D (100%)
wRhlH 3 days, 13 hours ago
For the exam: egress-only internet gateway = IPv6; NAT gateway = IPv4.
upvoted 1 times
TariqKipkemei 1 week ago
Outbound traffic only = Create an egress-only internet gateway and make it the destination of the subnet's route table
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
An egress-only internet gateway (EIGW) is specifically designed for IPv6 traffic and provides outbound IPv6 internet access while blocking inbound IPv6 traffic. It satisfies the requirement of preventing external services from initiating connections to the EC2 instances while allowing the instances to initiate outbound communications.
upvoted 4 times
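Option D boils down to one route: point the subnet's IPv6 default route (`::/0`) at an egress-only internet gateway. A minimal sketch of the `create_route` parameters, with placeholder IDs:

```python
# Sketch (option D): route the subnet's IPv6 default traffic through an
# egress-only internet gateway. IDs are placeholders.
route_params = {
    "RouteTableId": "rtb-0123",                  # subnet's route table
    "DestinationIpv6CidrBlock": "::/0",          # all IPv6 destinations
    "EgressOnlyInternetGatewayId": "eigw-0456",  # from create_egress_only_internet_gateway
}
# ec2 = boto3.client("ec2")
# eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0789")
# ec2.create_route(**route_params)
```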
RainWhisper 3 weeks, 2 days ago
Enable outbound IPv6 traffic using an egress-only internet gateway https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Since the company's security policy explicitly states that external services cannot initiate connections to the EC2 instances, using a NAT gateway (option A) would not be suitable. A NAT gateway allows outbound connections from private subnets to the internet, but it does not restrict inbound connections from external sources.
upvoted 5 times
radev 1 month, 2 weeks ago
Egress-Only internet Gateway
upvoted 3 times
Question #471 Topic 1
A company is creating an application that runs on containers in a VPC. The application stores and accesses data in an Amazon S3 bucket. During the development phase, the application will store and access 1 TB of data in Amazon S3 each day. The company wants to minimize costs and wants to prevent traffic from traversing the internet whenever possible.
Which solution will meet these requirements?
A. Enable S3 Intelligent-Tiering for the S3 bucket
B. Enable S3 Transfer Acceleration for the S3 bucket
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in the VPC
D. Create an interface endpoint for Amazon S3 in the VPC. Associate this endpoint with all route tables in the VPC
Community vote distribution
C (100%)
TariqKipkemei 1 week ago
Prevent traffic from traversing the internet = Gateway VPC endpoint for S3.
upvoted 1 times
Anmol_1010 1 month ago
Key phrase: traversing the internet
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Gateway VPC Endpoint: A gateway VPC endpoint enables private connectivity between a VPC and Amazon S3. It allows direct access to Amazon S3 without the need for internet gateways, NAT devices, VPN connections, or AWS Direct Connect.
Minimize Internet Traffic: By creating a gateway VPC endpoint for Amazon S3 and associating it with all route tables in the VPC, the traffic between the VPC and Amazon S3 will be kept within the AWS network. This helps in minimizing data transfer costs and prevents the need for traffic to traverse the internet.
Cost-Effective: With a gateway VPC endpoint, the data transfer between the application running in the VPC and the S3 bucket stays within the AWS network, reducing the need for data transfer across the internet. This can result in cost savings, especially when dealing with large amounts of data.
upvoted 4 times
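Option C in sketch form: create a gateway-type VPC endpoint for S3 and associate it with every route table in the VPC. The VPC ID, Region, and route table IDs are placeholders.

```python
# Sketch (option C): gateway VPC endpoint for S3, associated with all
# route tables in the VPC. IDs and Region are placeholders.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0123",
    "ServiceName": "com.amazonaws.us-east-1.s3",  # adjust Region as needed
    "RouteTableIds": ["rtb-0aaa", "rtb-0bbb"],    # all route tables in the VPC
}
# boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```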
cloudenthusiast 1 month, 1 week ago
Option B (Enable S3 Transfer Acceleration for the S3 bucket) is a feature that uses the CloudFront global network to accelerate data transfers to and from Amazon S3. While it can improve data transfer speed, it still involves traffic traversing the internet and doesn't directly address the goal of minimizing costs and preventing internet traffic whenever possible.
upvoted 1 times
Efren 1 month, 1 week ago
Gateway endpoint for S3
upvoted 2 times
nosense 1 month, 1 week ago
vpc endpoint for s3
upvoted 4 times
Question #472 Topic 1
A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes.
Which method should the solutions architect select?
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint.
B. Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas.
C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint.
D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB.
Community vote distribution
A (100%)
haoAWS 2 days, 7 hours ago
A read replica does improve read throughput, but it cannot improve latency because there is always replication lag between replicas. So A works and B does not.
upvoted 1 times
mattcl 5 days, 23 hours ago
C, "requires minimal application changes"
upvoted 1 times
TariqKipkemei 1 week ago
Little latency = Amazon DynamoDB Accelerator (DAX).
upvoted 1 times
DrWatson 3 weeks, 2 days ago
I go with A https://aws.amazon.com/blogs/mobile/building-a-full-stack-chat-application-with-aws-and-nextjs/ but I have some doubts about this https://aws.amazon.com/blogs/database/how-to-build-a-chat-application-with-amazon-elasticache-for-redis/
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Amazon DynamoDB Accelerator (DAX): DAX is an in-memory cache for DynamoDB that provides low-latency access to frequently accessed data. By configuring DAX for the new messages table, read requests for the table will be served from the DAX cache, significantly reducing the latency.
Minimal Application Changes: With DAX, the application code can be updated to use the DAX endpoint instead of the standard DynamoDB endpoint. This change is relatively minimal and does not require extensive modifications to the application's data access logic.
Low Latency: DAX caches frequently accessed data in memory, allowing subsequent read requests for the same data to be served with minimal latency. This ensures that new messages can be read by users with minimal delay.
upvoted 1 times
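To illustrate the "minimal application changes" point, here is a toy sketch, not the real SDK: the read path is the same function whether the client talks to DynamoDB or to a DAX cluster; only which client you construct changes (with boto3 you would build a `dynamodb` client, with the amazon-dax-client package a DAX client pointed at the cluster endpoint). `FakeClient`, the table name, and the key shape are made up for illustration so the sketch runs without AWS.

```python
# Sketch: with DAX, the only application change is which client you build;
# the call sites stay the same.

def read_new_messages(client, chat_id):
    """Read path is identical whether `client` is DynamoDB or DAX."""
    resp = client.get_item(
        TableName="Messages",
        Key={"ChatId": {"S": chat_id}},
    )
    return resp.get("Item")

class FakeClient:  # stand-in so the sketch runs without AWS
    def get_item(self, TableName, Key):
        return {"Item": {"ChatId": Key["ChatId"], "Text": {"S": "hi"}}}

item = read_new_messages(FakeClient(), "chat-42")
```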
cloudenthusiast 1 month, 1 week ago
Option B (Add DynamoDB read replicas) involves creating read replicas to handle the increased read load, but it may not directly address the requirement of minimizing latency for new message reads.
upvoted 1 times
Efren 1 month, 1 week ago
Tricky one, in doubt also with B, read replicas.
upvoted 1 times
Question #473 Topic 1
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing, and the company is concerned about a potential increase in cost.
A. Create an Amazon CloudFront distribution to cache static files at edge locations
B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve cached files
C. Create an AWS WAF web ACL and associate it with the ALB. Add a rule to the web ACL to cache static files
D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to minimize data transfer costs
Community vote distribution
A (100%)
TariqKipkemei 1 week ago
Serves static content = Amazon CloudFront distribution.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Amazon CloudFront: CloudFront is a content delivery network (CDN) service that caches content at edge locations worldwide. By creating a CloudFront distribution, static content from the website can be cached at edge locations, reducing the load on the EC2 instances and improving the overall performance.
Caching Static Files: Since the website serves static content, caching these files at CloudFront edge locations can significantly reduce the number of requests forwarded to the EC2 instances. This helps to lower the overall cost by offloading traffic from the instances and reducing the data transfer costs.
upvoted 2 times
nosense 1 month, 1 week ago
Question #474 Topic 1
A company has multiple VPCs across AWS Regions to support and run workloads that are isolated from workloads in other Regions. Because of a recent application launch requirement, the company’s VPCs must communicate with all other VPCs across all Regions.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Use VPC peering to manage VPC communication in a single Region. Use VPC peering across Regions to manage VPC communications.
B. Use AWS Direct Connect gateways across all Regions to connect VPCs across regions and manage VPC communications.
C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit Gateway peering across Regions to manage VPC communications.
D. Use AWS PrivateLink across all Regions to connect VPCs across Regions and manage VPC communications
Community vote distribution
C (100%)
TariqKipkemei 6 days, 20 hours ago
Definitely C.
Very well explained by @Felix_br
upvoted 1 times
Felix_br 3 weeks, 3 days ago
The correct answer is: C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit Gateway peering across Regions to manage VPC communications.
AWS Transit Gateway is a network hub that you can use to connect your VPCs and on-premises networks. It provides a single point of control for managing your network traffic, and it can help you to reduce the number of connections that you need to manage.
Transit Gateway peering allows you to connect two Transit Gateways in different Regions. This can help you to create a global network that spans multiple Regions.
To use Transit Gateway to manage VPC communication in a single Region, you would create a Transit Gateway in each Region. You would then attach your VPCs to the Transit Gateway.
To use Transit Gateway peering to manage VPC communication across Regions, you would create a Transit Gateway peering connection between the Transit Gateways in each Region.
upvoted 3 times
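The cross-Region peering step described above can be sketched as the parameters of a `create_transit_gateway_peering_attachment` call. The gateway IDs, account ID, and Region are placeholders.

```python
# Sketch of Transit Gateway peering across Regions (option C).
# IDs, account, and Region are placeholders.
peering_params = {
    "TransitGatewayId": "tgw-0aaa",      # TGW in the local Region
    "PeerTransitGatewayId": "tgw-0bbb",  # TGW in the remote Region
    "PeerAccountId": "111122223333",
    "PeerRegion": "eu-west-1",
}
# boto3.client("ec2").create_transit_gateway_peering_attachment(**peering_params)
```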
TariqKipkemei 6 days, 20 hours ago
thank you for this comprehensive explanation
upvoted 1 times
omoakin 1 month, 1 week ago
C
if you have services in multiple Regions, a Transit Gateway will allow you to access those services with a simpler network configuration.
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
AWS Transit Gateway: Transit Gateway is a highly scalable service that simplifies network connectivity between VPCs and on-premises networks. By using a Transit Gateway in a single Region, you can centralize VPC communication management and reduce administrative effort.
Transit Gateway Peering: Transit Gateway supports peering connections across AWS Regions, allowing you to establish connectivity between VPCs in different Regions without the need for complex VPC peering configurations. This simplifies the management of VPC communications across Regions.
upvoted 4 times
Question #475 Topic 1
A company is designing a containerized application that will use Amazon Elastic Container Service (Amazon ECS). The application needs to
access a shared file system that is highly durable and can recover data to another AWS Region with a recovery point objective (RPO) of 8 hours. The file system needs to provide a mount target in each Availability Zone within a Region.
A solutions architect wants to use AWS Backup to manage the replication to another Region. Which solution will meet these requirements?
A. Amazon FSx for Windows File Server with a Multi-AZ deployment
B. Amazon FSx for NetApp ONTAP with a Multi-AZ deployment
C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
D. Amazon FSx for OpenZFS
Community vote distribution
C (80%) B (20%)
elmogy Highly Voted 4 weeks, 1 day ago
https://aws.amazon.com/efs/faq/ Q: What is Amazon EFS Replication?
EFS Replication can replicate your file system data to another Region or within the same Region without requiring additional infrastructure or a custom process. Amazon EFS Replication automatically and transparently replicates your data to a second file system in a Region or AZ of your choice. You can use the Amazon EFS console, AWS CLI, and APIs to activate replication on an existing file system. EFS Replication is continual and provides a recovery point objective (RPO) and a recovery time objective (RTO) of minutes, helping you meet your compliance and business continuity goals.
upvoted 5 times
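The EFS replication described in the FAQ quote above can be sketched as the parameters of an `efs` `create_replication_configuration` call; the file system ID and destination Region are placeholders (in this question, AWS Backup manages the cross-Region copies instead).

```python
# Sketch of EFS cross-Region replication (supports option C).
# File system ID and Region are placeholders.
replication_params = {
    "SourceFileSystemId": "fs-0123",
    "Destinations": [{"Region": "us-west-2"}],  # replica Region
}
# boto3.client("efs").create_replication_configuration(**replication_params)
```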
TariqKipkemei Most Recent 6 days, 20 hours ago
Both option B and C will support this requirement. https://aws.amazon.com/efs/faq/#:~:text=What%20is%20Amazon%20EFS%20Replication%3F
https://aws.amazon.com/fsx/netapp-ontap/faqs/#:~:text=How%20do%20I%20configure%20cross%2Dregion%20replication%20for%20the%20data%20in%20my%20file%20system%3F
upvoted 1 times
omoakin 4 weeks, 1 day ago
B
upvoted 1 times
RainWhisper 1 month ago
Both B and C are feasible.
Amazon FSx for NetApp ONTAP is just overpriced for a backup storage solution. The keyword to look out for is sub-millisecond latency. In a real-life environment, Amazon Elastic File System (Amazon EFS) with the Standard storage class is good enough.
upvoted 3 times
Anmol_1010 1 month ago
EFS can be mounted only in one Region, so the answer is B
upvoted 1 times
Rob1L 1 month, 1 week ago
C: EFS
upvoted 2 times
y0 1 month, 1 week ago
Selected Answer: C
AWS Backup can manage replication of EFS to another region as mentioned below https://docs.aws.amazon.com/efs/latest/ug/awsbackup.html
upvoted 1 times
norris81 1 month, 1 week ago
https://aws.amazon.com/efs/faq/
During a disaster or fault within an AZ affecting all copies of your data, you might experience loss of data that has not been replicated using Amazon EFS Replication. EFS Replication is designed to meet a recovery point objective (RPO) and recovery time objective (RTO) of minutes. You can use AWS Backup to store additional copies of your file system data and restore them to a new file system in an AZ or Region of your choice. Amazon EFS file system backup data created and managed by AWS Backup is replicated to three AZs and is designed for 99.999999999% (11 nines) durability.
upvoted 1 times
nosense 1 month, 1 week ago
Amazon EFS is a scalable and durable elastic file system that can be used with Amazon ECS. However, it does not support replication to another AWS Region.
upvoted 1 times
elmogy 4 weeks, 1 day ago
it does support replication to another AWS Region check the same link you are replying to :/ https://aws.amazon.com/efs/faq/
Q: What is Amazon EFS Replication?
EFS Replication can replicate your file system data to another Region or within the same Region without requiring additional infrastructure or a custom process. Amazon EFS Replication automatically and transparently replicates your data to a second file system in a Region or AZ of your choice. You can use the Amazon EFS console, AWS CLI, and APIs to activate replication on an existing file system. EFS Replication is continual and provides a recovery point objective (RPO) and a recovery time objective (RTO) of minutes, helping you meet your compliance and business continuity goals.
upvoted 1 times
fakrap 1 month, 1 week ago
To use EFS replication in a Region that is disabled by default, you must first opt in to the Region, so it does support.
upvoted 1 times
nosense 1 month, 1 week ago
shared file system that is highly durable and can recover data
upvoted 2 times
Efren 1 month, 1 week ago
Why not EFS?
upvoted 1 times
Question #476 Topic 1
A company is expecting rapid growth in the near future. A solutions architect needs to configure existing users and grant permissions to new
users on AWS. The solutions architect has decided to create IAM groups. The solutions architect will add the new users to IAM groups based on department.
Which additional action is the MOST secure way to grant permissions to the new users?
A. Apply service control policies (SCPs) to manage access permissions
B. Create IAM roles that have least privilege permission. Attach the roles to the IAM groups
C. Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups
D. Create IAM roles. Associate the roles with a permissions boundary that defines the maximum permissions
Community vote distribution
C (86%) 14%
Rob1L Highly Voted 1 month, 1 week ago
Option B is incorrect because IAM roles are not directly attached to IAM groups.
upvoted 5 times
RoroJ 1 month ago
IAM Roles can be attached to IAM Groups: https://docs.aws.amazon.com/directoryservice/latest/admin-guide/assign_role.html
upvoted 2 times
antropaws 3 weeks, 2 days ago
Read your own link: You can assign an existing IAM role to an AWS Directory Service user or group. Not to IAM groups.
upvoted 4 times
TariqKipkemei Most Recent 6 days, 20 hours ago
An IAM policy is an object in AWS that, when associated with an identity or resource, defines their permissions. Permissions in the policies determine whether a request is allowed or denied. You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.
So, option B will also work.
But Since I can only choose one, C would be it.
upvoted 1 times
MrAWSAssociate 1 week, 2 days ago
You can attach up to 10 managed policies to a user group.
upvoted 1 times
antropaws 3 weeks, 2 days ago
C is the correct one.
upvoted 1 times
Efren 1 month, 1 week ago
Agreed with C https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_attach-policy.html
Attaching a policy to an IAM user group
upvoted 4 times
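Attaching a policy to a group, as in option C, can be sketched as the parameters of an `iam` `attach_group_policy` call; the group name and policy ARN are placeholders.

```python
# Sketch (option C): attach a least-privilege customer managed policy
# to an IAM group. Name and ARN are placeholders.
attach_params = {
    "GroupName": "engineering",
    "PolicyArn": "arn:aws:iam::111122223333:policy/engineering-least-privilege",
}
# boto3.client("iam").attach_group_policy(**attach_params)
```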
imazsyed 1 month, 1 week ago
it should be C
upvoted 3 times
nosense 1 month, 1 week ago
Option C is not as secure as option B because IAM policies are attached to individual users and cannot be used to manage permissions for groups of users.
upvoted 2 times
omoakin 1 month, 1 week ago
IAM Roles manage who has access to your AWS resources, whereas IAM policies control their permissions. A Role with no Policy attached to it won't have access to any AWS resources. A Policy that is not attached to an IAM role is effectively unused.
upvoted 2 times
Question #477 Topic 1
A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket. An administrator has created the following IAM policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company follows least-privilege access rules.
Which statement should a solutions architect add to the policy to correct bucket access?
A.
B.
C.
D.
Community vote distribution
D (100%)
AncaZalog 1 week, 5 days ago
What's the difference between B and D? In B the statements are just placed in another order.
upvoted 1 times
TariqKipkemei 6 days, 19 hours ago
Option B's action is s3:*, which means all actions. The company follows least-privilege access rules; hence option D.
upvoted 1 times
serepetru 3 weeks, 6 days ago
What is the difference between C and D?
upvoted 2 times
Ta_Les 2 weeks, 1 day ago
the "/" at the end of the last line on D
upvoted 1 times
nosense 1 month, 1 week ago
D works
upvoted 4 times
Efren 1 month, 1 week ago
Agreed
upvoted 1 times
Question #478 Topic 1
A law firm needs to share information with the public. The information includes hundreds of files that must be publicly readable. Modifications or deletions of the files by anyone before a designated future date are prohibited.
Which solution will meet these requirements in the MOST secure way?
A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the designated date.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket.
D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
Community vote distribution
B (100%)
TariqKipkemei 6 days, 19 hours ago
Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
upvoted 1 times
nosense 1 month, 1 week ago
Option A allows the files to be modified or deleted by anyone with read-only IAM permissions. Option C allows the files to be modified or deleted by anyone who can trigger the AWS Lambda function.
Option D allows the files to be modified or deleted by anyone with read-only IAM permissions to the S3 bucket
upvoted 3 times
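Option B's Object Lock setup can be sketched as follows. The bucket name and retention period are placeholders; compliance mode is used because, unlike governance mode, no principal can shorten the retention or delete the object before the date.

```python
# Sketch (option B): default Object Lock retention in compliance mode
# until the designated date. Bucket name and period are placeholders.
lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # no one can delete or overwrite early
            "Days": 365,           # stand-in for the designated future date
        }
    },
}
# s3 = boto3.client("s3")
# s3.create_bucket(Bucket="law-firm-files", ObjectLockEnabledForBucket=True)
# s3.put_object_lock_configuration(Bucket="law-firm-files",
#                                  ObjectLockConfiguration=lock_config)
```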
Question #479 Topic 1
A company is making a prototype of the infrastructure for its new website by manually provisioning the necessary infrastructure. This
infrastructure includes an Auto Scaling group, an Application Load Balancer and an Amazon RDS database. After the configuration has been thoroughly validated, the company wants the capability to immediately deploy the infrastructure for development and production use in two Availability Zones in an automated fashion.
What should a solutions architect recommend to meet these requirements?
A. Use AWS Systems Manager to replicate and provision the prototype infrastructure in two Availability Zones
B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy the infrastructure with AWS CloudFormation.
C. Use AWS Config to record the inventory of resources that are used in the prototype infrastructure. Use AWS Config to deploy the prototype infrastructure into two Availability Zones.
D. Use AWS Elastic Beanstalk and configure it to use an automated reference to the prototype infrastructure to automatically deploy new environments in two Availability Zones.
Community vote distribution
B (100%)
haoAWS 4 days, 2 hours ago
Why D is not correct?
upvoted 1 times
wRhlH 3 days, 13 hours ago
I guess "TEMPLATE" leads to CloudFormation
upvoted 1 times
TariqKipkemei 6 days, 19 hours ago
Infrastructure as code = AWS CloudFormation
upvoted 1 times
Felix_br 3 weeks, 3 days ago
AWS CloudFormation is a service that allows you to define and provision infrastructure as code. This means that you can create a template that describes the resources you want to create, and then use CloudFormation to deploy those resources in an automated fashion.
In this case, the solutions architect should define the infrastructure as a template by using the prototype infrastructure as a guide. The template should include resources for an Auto Scaling group, an Application Load Balancer, and an Amazon RDS database. Once the template is created, the solutions architect can use CloudFormation to deploy the infrastructure in two Availability Zones.
upvoted 1 times
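The template Felix_br describes can be sketched as a skeleton; the resource properties are deliberately elided and the logical names are placeholders, so this is an outline of the shape rather than a deployable template.

```python
# Minimal skeleton of the CloudFormation template for option B.
# Properties are elided; logical names are placeholders.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Prototype web tier in two Availability Zones
Resources:
  WebLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties: {}        # subnets in two AZs, security groups, ...
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties: {}        # launch template, min/max size, two AZs, ...
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties: {}        # engine, MultiAZ: true, ...
"""
# aws cloudformation deploy --template-file template.yaml --stack-name dev
```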
omoakin 4 weeks ago
B
Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy the infrastructure with AWS CloudFormation
upvoted 1 times
Question #480 Topic 1
A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object storage. The chief information security officer has directed that no application traffic between the two services should traverse the public internet.
Which capability should the solutions architect use to meet the compliance requirements?
A. AWS Key Management Service (AWS KMS)
B. VPC endpoint
C. Private subnet
D. Virtual private gateway
Community vote distribution
B (100%)
TariqKipkemei 6 days, 19 hours ago
Prevent traffic from traversing the internet = VPC endpoint for S3.
upvoted 1 times
antropaws 3 weeks, 2 days ago
B until proven contrary.
upvoted 1 times
Blingy 1 month ago
B
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
A VPC endpoint enables you to privately access AWS services without requiring internet gateways, NAT gateways, VPN connections, or AWS Direct Connect connections. It allows you to connect your VPC directly to supported AWS services, such as Amazon S3, over a private connection within the AWS network.
By creating a VPC endpoint for Amazon S3, the traffic between your EC2 instances and S3 will stay within the AWS network and won't traverse the public internet. This provides a more secure and compliant solution, as the data transfer remains within the private network boundaries.
upvoted 2 times
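A gateway endpoint can additionally carry an endpoint policy, which tightens option B further by restricting the private path to the application's bucket. The bucket name and actions below are illustrative placeholders.

```python
import json

# Sketch: an endpoint policy limiting the S3 gateway endpoint (option B)
# to one application bucket. Bucket name is a placeholder.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::app-objects/*",
        }
    ],
}
# Pass json.dumps(endpoint_policy) as the PolicyDocument when creating
# or modifying the VPC endpoint.
policy_document = json.dumps(endpoint_policy)
```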
Question #481 Topic 1
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer, and Amazon ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item to the database. The data in the cache must always match the data in the database.
Which solution will meet these requirements?
A. Implement the lazy loading caching strategy
B. Implement the write-through caching strategy
C. Implement the adding TTL caching strategy
D. Implement the AWS AppConfig caching strategy
Community vote distribution
B (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
In the write-through caching strategy, when a customer adds or updates an item in the database, the application first writes the data to the database and then updates the cache with the same data. This ensures that the cache is always synchronized with the database, as every write operation triggers an update to the cache.
upvoted 6 times
cloudenthusiast 1 month, 1 week ago
Lazy loading caching strategy (option A) typically involves populating the cache only when data is requested, and it does not guarantee that the data in the cache always matches the data in the database.
Adding TTL (Time-to-Live) caching strategy (option C) involves setting an expiration time for cached data. It is useful for scenarios where the data can be considered valid for a specific period, but it does not guarantee that the data in the cache is always in sync with the database.
AWS AppConfig caching strategy (option D) is a service that helps you deploy and manage application configurations. It is not specifically designed for caching data synchronization between a database and cache layer.
upvoted 3 times
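The write-through strategy described above can be shown with a toy sketch: plain dicts stand in for RDS and ElastiCache, and every write goes to both, so the cache never diverges from the database.

```python
# Toy sketch of write-through caching (option B): every database write
# also updates the cache, so the two never diverge.
database = {}  # stand-in for RDS
cache = {}     # stand-in for ElastiCache

def add_item(item_id, item):
    database[item_id] = item  # 1. write to the database
    cache[item_id] = item     # 2. write the same data to the cache

def get_item(item_id):
    # Reads are served from the cache; fall back to the database.
    return cache.get(item_id) or database.get(item_id)

add_item("sku-1", {"name": "widget"})
```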
TariqKipkemei Most Recent 5 days, 20 hours ago
The answer is definitely B.
I couldn't provide any more details than what has been shared by @cloudenthusiast.
upvoted 1 times
nosense 1 month, 1 week ago
write-through caching strategy updates the cache at the same time as the database
upvoted 2 times
Question #482 Topic 1
A company wants to migrate 100 GB of historical data from an on-premises location to an Amazon S3 bucket. The company has a 100 megabits per second (Mbps) internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store new data directly in Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket
B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
C. Use AWS Snowball to move the data to an S3 bucket
D. Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS CLI to move the data directly to an S3 bucket
Community vote distribution
B (89%) 11%
TariqKipkemei 5 days, 20 hours ago
AWS DataSync is a secure, online service that automates and accelerates moving data between on premises and AWS Storage services.
upvoted 1 times
vrevkov 1 week, 2 days ago
Why not A?
S3 transfers are already encrypted in transit by TLS.
We need the LEAST operational overhead, and DataSync implies installing an agent, whereas the AWS CLI is easier to use.
upvoted 2 times
Axeashes 1 week, 6 days ago
https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
AWS DataSync is a fully managed data transfer service that simplifies and automates the process of moving data between on-premises storage and Amazon S3. It provides secure and efficient data transfer with built-in encryption, ensuring that the data is encrypted in transit.
By using AWS DataSync, the company can easily migrate the 100 GB of historical data from their on-premises location to an S3 bucket. DataSync will handle the encryption of data in transit and ensure secure transfer.
upvoted 4 times
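The DataSync migration in option B reduces to a task between two locations. A sketch of the `create_task` parameters, with placeholder ARNs: the source would come from the on-premises agent's location and the destination from `create_location_s3`; DataSync encrypts the transfer with TLS.

```python
# Sketch of the DataSync task (option B). ARNs are placeholders.
task_params = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-dst",
    "Name": "historical-data-migration",
}
# boto3.client("datasync").create_task(**task_params)
```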
luiscc 1 month, 1 week ago
Using DataSync, the company can easily migrate the 100 GB of historical data to an S3 bucket. DataSync will handle the encryption of data in transit, so the company does not need to set up a VPN or worry about managing encryption keys.
Option A, using the s3 sync command in the AWS CLI to move the data directly to an S3 bucket, would require more operational overhead as the company would need to manage the encryption of data in transit themselves. Option D, setting up an IPsec VPN from the on-premises location to AWS, would also require more operational overhead and would be overkill for this scenario. Option C, using AWS Snowball, could work but would require more time and resources to order and set up the physical device.
upvoted 3 times
EA100 1 month, 1 week ago
Answer - A
Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket.
upvoted 4 times
Question #483 Topic 1
A company containerized a Windows job that runs on .NET 6 Framework under a Windows container. The company wants to run this job in the AWS Cloud. The job runs every 10 minutes. The job’s runtime varies between 1 minute and 3 minutes.
Which solution will meet these requirements MOST cost-effectively?
A. Create an AWS Lambda function based on the container image of the job. Configure Amazon EventBridge to invoke the function every 10 minutes.
B. Use AWS Batch to create a job that uses AWS Fargate resources. Configure the job scheduling to run every 10 minutes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a scheduled task based on the container image of the job to run every 10 minutes.
D. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a standalone task based on the container image of the job. Use Windows task scheduler to run the job every 10 minutes.
Community vote distribution
C (64%) A (18%) B (18%)
wRhlH 3 days, 12 hours ago
For those wonder why not B
AWS Batch doesn't support Windows containers on either Fargate or EC2 resources. https://docs.aws.amazon.com/batch/latest/userguide/fargate.html#when-to-use-fargate:~:text=AWS%20Batch%20doesn%27t%20support%20Windows%20containers%20on%20either%20Fargate%20or%20EC2%20resources.
upvoted 1 times
mattcl 5 days, 2 hours ago
A: Lambda supports containerized applications
upvoted 1 times
TariqKipkemei 5 days, 20 hours ago
AWS Fargate will bill you based on the amount of vCPU, RAM, OS, CPU architecture, and storage that your containerized apps consume while running on EKS or ECS. From the time you start downloading a container image until the ECS task or EKS pod ends.
Lambda is also an option but will involve some re-architecting, so why take the long route?
upvoted 1 times
MrAWSAssociate 1 week, 2 days ago
The company's app is already containerized using .NET. Now the company wants to use an AWS solution (it should not have to be ECS), so one easy possibility is Lambda with EventBridge, as option A states!
upvoted 1 times
MrAWSAssociate 1 week, 2 days ago
Furthermore, Lambda can create "Container Image" appropriate for the company containerized app.
upvoted 1 times
AnishGS 1 week, 6 days ago
By leveraging AWS Fargate and ECS, you can achieve cost-effective scaling and resource allocation for your containerized Windows job running on .NET 6 Framework in the AWS Cloud. The serverless nature of Fargate ensures that you only pay for the actual resources consumed by your containers, allowing for efficient cost management.
upvoted 1 times
Axeashes 1 week, 6 days ago
came across this study: https://blogs.perficient.com/2021/06/17/aws-cost-analysis-comparing-lambda-ec2-fargate/
Indicating Fargate as a lower cost than Lambda for little or no idle time - I believe that is the case. .NET 6 seems supported on both Lambda and Fargate.
upvoted 1 times
AshishRocks 3 weeks, 2 days ago
By utilizing AWS Fargate to run the containerized Windows job on .NET 6 Framework, and scheduling it using CloudWatch Events, you can achieve cost-effective execution while meeting the job's requirements. C is the answer
upvoted 1 times
omoakin 4 weeks ago
CCCCCCCCCC
upvoted 2 times
PRASAD180 1 month ago
100% C correct
upvoted 2 times
Anmol_1010 1 month, 1 week ago
C for sure
upvoted 1 times
AmrFawzy93 1 month, 1 week ago
By using Amazon ECS on AWS Fargate, you can run the job in a containerized environment while benefiting from the serverless nature of Fargate, where you only pay for the resources used during the job's execution. Creating a scheduled task based on the container image of the job ensures that it runs every 10 minutes, meeting the required schedule. This solution provides flexibility, scalability, and cost-effectiveness.
upvoted 4 times
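The scheduled-task approach in option C can be sketched with the EventBridge CLI roughly as follows (cluster, task definition, role, and subnet identifiers are all hypothetical):

```shell
# Sketch only: an EventBridge rule that launches the containerized job
# as a Fargate task every 10 minutes.
aws events put-rule \
  --name run-dotnet-job \
  --schedule-expression "rate(10 minutes)"

aws events put-targets \
  --rule run-dotnet-job \
  --targets '[{
    "Id": "job-task",
    "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/jobs",
    "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
    "EcsParameters": {
      "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/dotnet-job",
      "LaunchType": "FARGATE",
      "NetworkConfiguration": {
        "awsvpcConfiguration": {
          "Subnets": ["subnet-0abc1234"]
        }
      }
    }
  }]'
```

Because billing stops when each 1-3 minute task exits, the job is only paid for while it actually runs.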
Rob1L 1 month, 1 week ago
It's A: Lambda supports .NET 6
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
AWS Batch is a cost-effective service designed to handle batch computing workloads, making it suitable for running periodic jobs like the one described. By utilizing AWS Fargate as the underlying compute environment, you can efficiently run your Windows job without managing the infrastructure. You can configure the job scheduling in AWS Batch to execute the job every 10 minutes.
While option C (using Amazon ECS on AWS Fargate with a scheduled task) is also a valid approach, it may introduce additional complexity as you would need to manage the scheduling of the task separately from AWS Batch.
Therefore, for the given requirements, option B using AWS Batch is the recommended and most cost-effective solution.
upvoted 1 times
norris81 1 month, 1 week ago
https://aws.amazon.com/about-aws/whats-new/2021/10/aws-fargate-amazon-ecs-windows-containers/ https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
Lambda supports only Linux-based container images
upvoted 3 times
exam9391 1 month, 1 week ago
A -> https://docs.aws.amazon.com/lambda/latest/dg/lambda-csharp.html
upvoted 1 times
nosense 1 month, 1 week ago
B is most cost-effective
upvoted 1 times
Question #484 Topic 1
A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many new AWS accounts for different business units. The company needs to authenticate access to these AWS accounts by using a centralized
corporate directory service.
Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)
A. Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito authentication.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory Service.
D. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service directly.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's corporate directory service.
Community vote distribution
AE (100%)
samehpalass 5 days, 8 hours ago
A: AWS Organizations.
E: For authentication; option C (SCP) is for authorization.
upvoted 1 times
TariqKipkemei 5 days, 20 hours ago
Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's corporate directory service.
AWS IAM Identity Center (successor to AWS Single Sign-On) helps you securely create or connect your workforce identities and manage their access centrally across AWS accounts and applications.
https://aws.amazon.com/iam/identity-center/#:~:text=AWS%20IAM%20Identity%20Center%20(successor%20to%20AWS%20Single%20Sign%2DOn)%20helps%20you%20securely%20create%20or%20connect%20your%20workforce%20identities%20and%20manage%20their%20access%20centrally%20across%20AWS%20accounts%20and%20applications.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
A. By creating a new organization in AWS Organizations, you can establish a consolidated multi-account architecture. This allows you to create and manage multiple AWS accounts for different business units under a single organization.
E. Setting up AWS IAM Identity Center (AWS Single Sign-On) within the organization enables you to integrate it with the company's corporate directory service. This integration allows for centralized authentication, where users can sign in using their corporate credentials and access the AWS accounts within the organization.
Together, these actions create a centralized, multi-account architecture that leverages AWS Organizations for account management and AWS IAM Identity Center (AWS Single Sign-On) for authentication and access control.
upvoted 4 times
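The Organizations half of this (option A) can be sketched with the CLI as follows; the account email and name are placeholders. Enabling IAM Identity Center and connecting the corporate directory (option E) is done in the console or via the Identity Center APIs, so it is only noted in comments here:

```shell
# Sketch only: create the organization with all features enabled,
# then create a member account for a business unit.
aws organizations create-organization --feature-set ALL

aws organizations create-account \
  --email bu1-root@example.com \
  --account-name "Business Unit 1"

# Next, enable IAM Identity Center in the organization and set the
# corporate directory as its identity source (e.g. AWS Managed Microsoft AD
# or an external SAML/SCIM identity provider).
```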
nosense 1 month, 1 week ago
AE is right
upvoted 1 times
Question #485 Topic 1
A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.
What is the MOST cost-effective solution?
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Community vote distribution
A (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By choosing Expedited retrievals in Amazon S3 Glacier, you can reduce the retrieval time to minutes, making it suitable for scenarios where quick access is required. Expedited retrievals come with a higher cost per retrieval compared to standard retrievals but provide faster access to your archived data.
upvoted 6 times
TariqKipkemei Most Recent 5 days, 20 hours ago
Expedited retrievals allow you to quickly access your data that's stored in the S3 Glacier Flexible Retrieval storage class or the S3 Intelligent-Tiering Archive Access tier when occasional urgent requests for restoring archives are required. Data accessed by using Expedited retrievals is typically made available within 1–5 minutes.
upvoted 1 times
Doyin8807 4 weeks, 1 day ago
C because A is not the most cost effective
upvoted 1 times
luiscc 1 month, 1 week ago
Expedited retrieval typically takes 1-5 minutes to retrieve data, making it suitable for the company's requirement of having the files available in a maximum of five minutes.
upvoted 3 times
EA100 1 month, 1 week ago
Answer - A
Fast availability: Although retrieval times for objects stored in Amazon S3 Glacier typically range from minutes to hours, you can use the Expedited retrievals option to expedite access to your archives. By using Expedited retrievals, the files can be made available in a maximum of five minutes when needed. However, Expedited retrievals do incur higher costs compared to standard retrievals.
upvoted 1 times
nosense 1 month, 1 week ago
Glacier Expedited retrieval times are typically 1-5 minutes.
upvoted 2 times
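An Expedited restore of an archived object can be requested like this (bucket and key are placeholders):

```shell
# Sketch only: restore a Glacier-archived object with the Expedited tier;
# the restored copy is typically available within 1-5 minutes and remains
# accessible for the requested number of days.
aws s3api restore-object \
  --bucket news-archive \
  --key footage/1998-06-01.mov \
  --restore-request '{"Days": 1, "GlacierJobParameters": {"Tier": "Expedited"}}'
```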
Question #486 Topic 1
A company is building a three-tier application on AWS. The presentation tier will serve a static website. The logic tier is a containerized application. This application will store data in a relational database. The company wants to simplify deployment and to reduce operational costs.
Which solution will meet these requirements?
A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
B. Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.
C. Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
D. Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.
Community vote distribution
A (100%)
TariqKipkemei 2 days, 20 hours ago
Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database
upvoted 1 times
Yadav_Sanjay 1 month, 1 week ago
ECS is slightly cheaper than EKS
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
Amazon S3 is a highly scalable and cost-effective storage service that can be used to host static website content. It provides durability, high availability, and low latency access to the static files.
Amazon ECS with AWS Fargate eliminates the need to manage the underlying infrastructure. It allows you to run containerized applications without provisioning or managing EC2 instances. This reduces operational overhead and provides scalability.
By using a managed Amazon RDS cluster for the database, you can offload the management tasks such as backups, patching, and monitoring to AWS. This reduces the operational burden and ensures high availability and durability of the database.
upvoted 3 times
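The static-hosting part of option A can be sketched in a few CLI commands (the bucket name and local build directory are placeholders):

```shell
# Sketch only: create the bucket, enable static website hosting,
# and sync the built static site into it.
aws s3 mb s3://three-tier-static-site

aws s3 website s3://three-tier-static-site/ \
  --index-document index.html \
  --error-document error.html

aws s3 sync ./dist s3://three-tier-static-site/
```

In practice the bucket is often fronted by CloudFront, but S3 alone satisfies the "host static content" requirement here.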
Question #487 Topic 1
A company seeks a storage solution for its application. The solution must be highly available and scalable. The solution also must function as a file system, be mountable by multiple Linux instances in AWS and on premises through native protocols, and have no minimum size requirements. The company has set up a Site-to-Site VPN for access from its on-premises network to its VPC.
Which storage solution meets these requirements?
A. Amazon FSx Multi-AZ deployments
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points
Community vote distribution
C (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
Amazon EFS is a fully managed file system service that provides scalable, shared storage for Amazon EC2 instances. It supports the Network File System version 4 (NFSv4) protocol, which is a native protocol for Linux-based systems. EFS is designed to be highly available, durable, and scalable.
upvoted 5 times
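Mounting EFS from a Linux instance looks like this (the file system ID is a placeholder; from on premises over the Site-to-Site VPN, the mount target's IP address is used in place of the DNS name, since EFS DNS names are not resolvable outside the VPC):

```shell
# Sketch only: mount the EFS file system over NFSv4.1 with the
# commonly recommended mount options.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
```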
Felix_br Most Recent 3 weeks, 3 days ago
The other options are incorrect for the following reasons:
A. Amazon FSx Multi-AZ deployments: Amazon FSx provides fully managed file systems (for example, FSx for Windows File Server and FSx for Lustre). Those offerings use SMB or the Lustre client rather than plain NFS from any Linux instance, and they have minimum storage-size requirements, so they do not fit this scenario.
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes Amazon EBS is a block storage service that provides durable, block-level storage volumes for use with Amazon EC2 instances. Amazon EBS Multi-Attach volumes can be attached to multiple EC2 instances at the same time, but they cannot be mounted by multiple Linux instances through native protocols, such as NFS.
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points: a mount target serves one Availability Zone, so a single mount target limits availability to that AZ. Access points are application-specific entry points into the file system; they do not add the per-AZ network availability that multiple mount targets provide.
upvoted 2 times
boubie44 1 month ago
I don't understand why not D?
upvoted 1 times
lucdt4 1 month ago
The requirement is to be mountable by multiple Linux instances
-> C (multiple mount targets)
upvoted 2 times
Question #488 Topic 1
A 4-year-old media company is using the AWS Organizations all features feature set to organize its AWS accounts. According to the company's finance team, the billing information on the member accounts must not be accessible to anyone, including the root user of the member accounts.
Which solution will meet these requirements?
A. Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the group.
B. Attach an identity-based policy to deny access to the billing information to all users, including the root user.
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root organizational unit (OU).
D. Convert from the Organizations all features feature set to the Organizations consolidated billing feature set.
Community vote distribution
C (100%)
TariqKipkemei 2 days, 20 hours ago
Service control policy are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines. SCPs are available only in an organization that has all features enabled.
upvoted 1 times
Abrar2022 3 weeks, 3 days ago
By denying access to billing information at the root OU, you can ensure that no member accounts, including root users, have access to the billing information.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Service Control Policies (SCP): SCPs are an integral part of AWS Organizations and allow you to set fine-grained permissions on the organizational units (OUs) within your AWS Organization. SCPs provide central control over the maximum permissions that can be granted to member accounts, including the root user.
Denying Access to Billing Information: By creating an SCP and attaching it to the root OU, you can explicitly deny access to billing information for all accounts within the organization. SCPs can be used to restrict access to various AWS services and actions, including billing-related services.
Granular Control: SCPs enable you to define specific permissions and restrictions at the organizational unit level. By denying access to billing information at the root OU, you can ensure that no member accounts, including root users, have access to the billing information.
upvoted 3 times
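A billing-deny SCP along these lines can be sketched as follows (the policy ID and root OU ID are placeholders; note that SCPs never restrict the management account itself):

```shell
# Sketch only: an SCP that denies the Billing console actions,
# created and attached to the organization root.
cat > deny-billing.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": ["aws-portal:View*", "aws-portal:Modify*"],
      "Resource": "*"
    }
  ]
}
EOF

aws organizations create-policy \
  --name DenyBillingAccess \
  --type SERVICE_CONTROL_POLICY \
  --description "Block billing access in member accounts" \
  --content file://deny-billing.json

aws organizations attach-policy \
  --policy-id p-examplepolicyid \
  --target-id r-examplerootid
```

Because SCPs bound the permissions of every principal in a member account, the deny applies even to that account's root user, which is what the finance team requires.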
Question #489 Topic 1
An ecommerce company runs an application in the AWS Cloud that is integrated with an on-premises warehouse solution. The company uses
Amazon Simple Notification Service (Amazon SNS) to send order messages to an on-premises HTTPS endpoint so the warehouse application can process the orders. The local data center team has detected that some of the order messages were not received.
A solutions architect needs to retain messages that are not delivered and analyze the messages for up to 14 days. Which solution will meet these requirements with the LEAST development effort?
A. Configure an Amazon SNS dead letter queue that has an Amazon Kinesis Data Stream target with a retention period of 14 days.
B. Add an Amazon Simple Queue Service (Amazon SQS) queue with a retention period of 14 days between the application and Amazon SNS.
C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service (Amazon SQS) target with a retention period of 14 days.
D. Configure an Amazon SNS dead letter queue that has an Amazon DynamoDB target with a TTL attribute set for a retention period of 14 days.
Community vote distribution
C (88%) 13%
TariqKipkemei 2 days, 19 hours ago
C is best to handle this requirement. Although good to note that dead-letter queue is an SQS queue.
"A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to subscribers successfully. Messages that can't be delivered due to client errors or server errors are held in the dead-letter queue for further analysis or reprocessing."
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html#:~:text=A%20dead%2Dletter%20queue%20is%20an%20Amazon%20SQS%20queue
upvoted 1 times
Felix_br 3 weeks, 3 days ago
C - Amazon SNS dead letter queues are used to handle messages that are not delivered to their intended recipients. When a message is sent to an Amazon SNS topic, it is first delivered to the topic's subscribers. If a message is not delivered to any of the subscribers, it is sent to the topic's dead letter queue.
Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS queues can be configured to have a retention period, which is the amount of time that messages will be kept in the queue before they are deleted.
To meet the requirements of the company, you can configure an Amazon SNS dead letter queue that has an Amazon SQS target with a retention period of 14 days. This will ensure that any messages that are not delivered to the on-premises warehouse application will be stored in the Amazon SQS queue for up to 14 days. The company can then analyze the messages in the Amazon SQS queue to determine why they were not delivered.
upvoted 1 times
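Option C can be sketched in two CLI calls (the queue URL and ARNs are placeholders): set the SQS queue's retention to 14 days (1209600 seconds, the SQS maximum), then point the SNS subscription's redrive policy at that queue.

```shell
# Sketch only: 14-day retention on the dead-letter queue.
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/order-dlq \
  --attributes MessageRetentionPeriod=1209600

# Attach the queue as the subscription's dead-letter queue.
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:us-east-1:123456789012:orders:subscription-id \
  --attribute-name RedrivePolicy \
  --attribute-value '{"deadLetterTargetArn":"arn:aws:sqs:us-east-1:123456789012:order-dlq"}'
```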
Yadav_Sanjay 1 month ago
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html
upvoted 2 times
Rob1L 1 month, 1 week ago
The message retention period in Amazon SQS can be set between 1 minute and 14 days (the default is 4 days). Therefore, you can configure your SQS DLQ to retain undelivered SNS messages for 14 days. This will enable you to analyze undelivered messages with the least development effort.
upvoted 4 times
nosense 1 month, 1 week ago
A is a good solution, but it requires modifying the application. The application would need to be modified to send messages to the Amazon Kinesis Data Stream instead of the on-premises HTTPS endpoint.
Option B is not a good solution. The application would need to be modified to send messages to the Amazon SQS queue instead of the on-premises HTTPS endpoint.
Option D is not a good solution because Amazon DynamoDB is not designed for storing messages for long periods of time. Option C is the best solution because it does not require any changes to the application.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
By adding an Amazon SQS queue as an intermediary between the application and Amazon SNS, you can retain undelivered messages for analysis. Amazon SQS provides a built-in retention period that allows you to specify how long messages should be retained in the queue. By setting the retention period to 14 days, you can ensure that the undelivered messages are available for analysis within that timeframe. This solution requires minimal development effort as it leverages Amazon SQS's capabilities without the need for custom code development.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Amazon Simple Notification Service (Amazon SNS) does not directly support dead letter queues. The dead letter queue feature is available in services like Amazon Simple Queue Service (Amazon SQS) and AWS Lambda, but not in Amazon SNS.
upvoted 2 times
Efren 1 month, 1 week ago
Agree with you
A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to subscribers successfully.
upvoted 1 times
Efren 1 month, 1 week ago
ChatGPT says it's SQS.. not sure
upvoted 1 times
Efren 1 month, 1 week ago
D for me. You send to SQS and then what? It needs to send it to some service where it can be read, if I'm not wrong.
upvoted 1 times
Question #490 Topic 1
A gaming company uses Amazon DynamoDB to store user information such as geographic location, player data, and leaderboards. The company needs to configure continuous backups to an Amazon S3 bucket with a minimal amount of coding. The backups must not affect availability of the application and must not affect the read capacity units (RCUs) that are defined for the table.
Which solution meets these requirements?
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket.
D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table.
Community vote distribution
B (100%)
TariqKipkemei 2 days, 19 hours ago
Using DynamoDB table export, you can export data from an Amazon DynamoDB table from any time within your point-in-time recovery window to an Amazon S3 bucket. Exporting a table does not consume read capacity on the table, and has no impact on table performance and availability.
upvoted 1 times
elmogy 1 month ago
Continuous backups are a native feature of DynamoDB; they work at any scale without having to manage servers or clusters and allow you to export data across AWS Regions and accounts from any point in time in the last 35 days at per-second granularity. Plus, they don't affect the read capacity or the availability of your production tables.
https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/
upvoted 4 times
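The export flow behind option B can be sketched like this (table name, ARN, and bucket are placeholders): enable point-in-time recovery, then export; the export reads from the continuous backup rather than the live table, so it consumes no RCUs.

```shell
# Sketch only: turn on PITR for the table.
aws dynamodb update-continuous-backups \
  --table-name Players \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true

# Export the table's data (from the PITR window) to S3.
aws dynamodb export-table-to-point-in-time \
  --table-arn arn:aws:dynamodb:us-east-1:123456789012:table/Players \
  --s3-bucket player-data-exports
```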
norris81 1 month ago
https://repost.aws/knowledge-center/back-up-dynamodb-s3
https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-continuous-backups-and-point-in-time-recovery-pitr/ https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Continuous Backups: DynamoDB provides a feature called continuous backups, which automatically backs up your table data. Enabling continuous backups ensures that your table data is continuously backed up without the need for additional coding or manual interventions.
Export to Amazon S3: With continuous backups enabled, DynamoDB can directly export the backups to an Amazon S3 bucket. This eliminates the need for custom coding to export the data.
Minimal Coding: Option B requires the least amount of coding effort as continuous backups and the export to Amazon S3 functionality are built-in features of DynamoDB.
No Impact on Availability and RCUs: Enabling continuous backups and exporting data to Amazon S3 does not affect the availability of your application or the read capacity units (RCUs) defined for the table. These operations happen in the background and do not impact the table's performance or consume additional RCUs.
upvoted 2 times
Efren 1 month, 1 week ago
DynamoDB Export to S3 feature
Using this feature, you can export data from an Amazon DynamoDB table anytime within your point-in-time recovery window to an Amazon S3 bucket.
upvoted 1 times
Efren 1 month, 1 week ago
B also for me
upvoted 1 times
norris81 1 month, 1 week ago
https://repost.aws/knowledge-center/back-up-dynamodb-s3
https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-continuous-backups-and-point-in-time-recovery-pitr/ https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
upvoted 1 times
Efren 1 month, 1 week ago
you could mention which answer you think is best :)
upvoted 1 times
Question #491 Topic 1
A solutions architect is designing an asynchronous application to process credit card data validation requests for a bank. The application must be secure and be able to process each request at least once.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use AWS Key Management Service (SSE-KMS) for encryption. Add the kms:Decrypt permission for the Lambda execution role.
B. Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use SQS managed encryption keys (SSE-SQS) for encryption. Add the encryption key invocation permission for the Lambda function.
C. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use AWS KMS keys (SSE-KMS). Add the kms:Decrypt permission for the Lambda execution role.
D. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use AWS KMS keys (SSE-KMS) for encryption. Add the encryption key invocation permission for the Lambda function.
Community vote distribution
A (82%) Other
Abrar2022 3 weeks, 3 days ago
"At least once" and cost-effective suggest SQS standard
upvoted 1 times
Felix_br 3 weeks, 3 days ago
Solution B is the most cost-effective solution to meet the requirements of the application.
Amazon Simple Queue Service (SQS) FIFO queues are a good choice for this application because they guarantee that messages are processed in the order in which they are received. This is important for credit card data validation because it ensures that fraudulent transactions are not processed before legitimate transactions.
SQS managed encryption keys (SSE-SQS) are a good choice for encrypting the messages in the SQS queue because they are free to use. AWS Key Management Service (KMS) keys (SSE-KMS) are also a good choice for encrypting the messages, but they do incur a cost.
upvoted 1 times
omoakin 4 weeks ago
AAAAAAAA
upvoted 1 times
elmogy 1 month ago
SQS FIFO is slightly more expensive than standard queue https://calculator.aws/#/addService/SQS
I would still go with the standard queue because of the keyword "at least once"; FIFO processes "exactly once". That leaves us with A and D, and I believe the Lambda function only needs to decrypt, so I would choose A.
upvoted 3 times
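Option A can be sketched as follows (queue name, key alias, function name, and ARN are placeholders): a standard queue encrypted with a customer managed KMS key, mapped to the Lambda function; the function's execution role additionally needs kms:Decrypt on that key.

```shell
# Sketch only: standard queue with SSE-KMS encryption.
aws sqs create-queue \
  --queue-name card-validation-requests \
  --attributes KmsMasterKeyId=alias/card-validation

# Map the queue to the Lambda function as an event source.
aws lambda create-event-source-mapping \
  --function-name validate-card \
  --event-source-arn arn:aws:sqs:us-east-1:123456789012:card-validation-requests
```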
Yadav_Sanjay 1 month ago
should be A. Key word - at least once and cost effective suggests SQS standard
upvoted 2 times
Efren 1 month, 1 week ago
It has to be standard, not FIFO. It doesn't say just once, it says at least once, so that is the standard queue, which is cheaper than FIFO. Between the two standard options, not sure to be honest.
upvoted 2 times
jayce5 1 month ago
No, when it comes to "credit card data validation," it should be FIFO. If you use the standard approach, there is a chance that people who come after will get processed before those who come first.
upvoted 1 times
awwass 1 month, 1 week ago
This solution uses standard queues in Amazon SQS, which are less expensive than FIFO queues. It also uses AWS Key Management Service (SSE-KMS) for encryption, which is a cost-effective way to encrypt data at rest and in transit. The kms:Decrypt permission is added to the Lambda execution role to allow it to decrypt messages from the queue
upvoted 1 times
Rob1L 1 month, 1 week ago
Options B and C involve using SQS FIFO queues, which guarantee exactly-once processing, which is more expensive and more than necessary for the requirement of at-least-once processing.
upvoted 2 times
Efren 1 month, 1 week ago
For me its b, kms:decrypt is an action
upvoted 3 times
nosense 1 month, 1 week ago
not add the kms:Decrypt permission for the Lambda execution role, which means that Lambda will have to decrypt the data on each invocation
upvoted 2 times
Efren 1 month, 1 week ago
I'd say then A
upvoted 1 times
Question #492 Topic 1
A company has multiple AWS accounts for development work. Some staff consistently use oversized Amazon EC2 instances, which causes the company to exceed the yearly budget for the development accounts. The company wants to centrally restrict the creation of AWS resources in these accounts.
Which solution will meet these requirements with the LEAST development effort?
A. Develop AWS Systems Manager templates that use an approved EC2 creation process. Use the approved Systems Manager templates to provision EC2 instances.
B. Use AWS Organizations to organize the accounts into organizational units (OUs). Define and attach a service control policy (SCP) to control the usage of EC2 instance types.
C. Configure an Amazon EventBridge rule that invokes an AWS Lambda function when an EC2 instance is created. Stop disallowed EC2 instance types.
D. Set up AWS Service Catalog products for the staff to create the allowed EC2 instance types. Ensure that staff can deploy EC2 instances only by using the Service Catalog products.
Community vote distribution
B (100%)
alexandercamachop 3 weeks, 3 days ago
Anytime you see multiple AWS accounts and a need to consolidate, it's AWS Organizations. Also, anytime we need to restrict anything in an organization, it's SCPs.
upvoted 2 times
Blingy 4 weeks, 1 day ago
BBBBBBBBB
upvoted 1 times
elmogy 1 month ago
I would choose B
The other options would require some level of programming or custom resource creation:
A. Developing Systems Manager templates requires development effort
C. Configuring EventBridge rules and Lambda functions requires development effort
D. Creating Service Catalog products requires development effort to define the allowed EC2 configurations.
Option B - Using Organizations service control policies - requires no custom development. It involves:
Organizing accounts into OUs
Creating an SCP that defines allowed/disallowed EC2 instance types
Attaching the SCP to the appropriate OUs
This is a native AWS service with a simple UI for defining and managing policies. No coding or resource creation is needed. So option B, using Organizations service control policies, will meet the requirements with the least development effort.
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
AWS Organizations: AWS Organizations is a service that helps you centrally manage multiple AWS accounts. It enables you to group accounts into organizational units (OUs) and apply policies across those accounts.
Service Control Policies (SCPs): SCPs in AWS Organizations allow you to define fine-grained permissions and restrictions at the account or OU level. By attaching an SCP to the development accounts, you can control the creation and usage of EC2 instance types.
Least Development Effort: Option B requires minimal development effort as it leverages the built-in features of AWS Organizations and SCPs. You can define the SCP to restrict the use of oversized EC2 instance types and apply it to the appropriate OUs or accounts.
upvoted 3 times
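To make the SCP approach concrete, here is a minimal sketch of such a policy. The instance-type allow list and Sid are illustrative assumptions, not from the question:

```python
import json

# Sketch of an SCP that denies launching any EC2 instance type outside an
# allow list. Instance types below are illustrative placeholders.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOversizedInstances",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                # Deny any launch whose instance type is NOT in the allow list
                "StringNotEquals": {
                    "ec2:InstanceType": ["t3.micro", "t3.small", "t3.medium"]
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached to the development OUs, this blocks oversized launches in every member account with no custom code to maintain.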
Efren 1 month, 1 week ago
B for me as well
upvoted 1 times
Question #493 Topic 1
A company wants to use artificial intelligence (AI) to determine the quality of its customer service calls. The company currently manages calls in four different languages, including English. The company will offer new languages in the future. The company does not have the resources to
regularly maintain machine learning (ML) models.
The company needs to create written sentiment analysis reports from the customer service call recordings. The customer service call recording text must be translated into English.
Which combination of steps will meet these requirements? (Choose three.)
A. Use Amazon Comprehend to translate the audio recordings into English.
B. Use Amazon Lex to create the written sentiment analysis reports.
C. Use Amazon Polly to convert the audio recordings into text.
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
E. Use Amazon Translate to translate text in any language to English.
F. Use Amazon Comprehend to create the sentiment analysis reports.
Community vote distribution
DEF (100%)
HareshPrajapati 4 weeks, 1 day ago
agree with DEF
upvoted 1 times
Blingy 4 weeks, 1 day ago
I’d go with DEF too
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
Amazon Transcribe will convert the audio recordings into text, Amazon Translate will translate the text into English, and Amazon Comprehend will perform sentiment analysis on the translated text to generate sentiment analysis reports.
upvoted 4 times
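The D-E-F pipeline can be sketched as three chained steps. The stub functions below are placeholders standing in for Amazon Transcribe, Amazon Translate, and Amazon Comprehend calls; real code would use boto3 clients, and the sample strings are invented:

```python
# Hedged sketch of the Transcribe -> Translate -> Comprehend pipeline.
# Each stub simulates the corresponding managed service's role.

def transcribe(audio: bytes) -> str:
    # Amazon Transcribe: audio recording in any language -> text
    return "hola, gracias por su ayuda"

def translate_to_english(text: str) -> str:
    # Amazon Translate: text in any language -> English
    return "hello, thank you for your help"

def sentiment_report(text: str) -> dict:
    # Amazon Comprehend: English text -> sentiment analysis report
    return {"Sentiment": "POSITIVE", "Text": text}

def analyze_call(audio: bytes) -> dict:
    # Chain the three managed services; no ML models to maintain.
    return sentiment_report(translate_to_english(transcribe(audio)))

print(analyze_call(b"call-recording")["Sentiment"])
```

Because all three are fully managed, the company never trains or maintains models, and new languages only require Transcribe/Translate support.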
Efren 1 month, 1 week ago
Agreed as well, weird.
upvoted 1 times
Question #494 Topic 1
A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.
The administrator is using an IAM role that has the following IAM policy attached:
What is the cause of the unsuccessful request?
A. The EC2 instance has a resource-based policy with a Deny statement.
B. The principal has not been specified in the policy statement.
C. The "Action" field does not grant the actions that are required to terminate the EC2 instance.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.
Community vote distribution
D (100%)
elmogy 1 month ago
" aws:SourceIP " indicates the IP address that is trying to perform the action.
upvoted 1 times
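The policy image is not reproduced in this dump, but a hypothetical reconstruction of the shape answer D implies (an `aws:SourceIp` condition on the terminate action) looks like this. Everything below is an assumption for illustration:

```python
import ipaddress

# Hypothetical policy of the kind the question describes: terminate is
# allowed only when the request originates from specific CIDR blocks.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]
                }
            },
        }
    ],
}

# A request from outside those ranges matches no Allow -> 403 Access Denied.
allowed = [ipaddress.ip_network(c) for c in
           policy["Statement"][0]["Condition"]["IpAddress"]["aws:SourceIp"]]

def request_allowed(source_ip: str) -> bool:
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in allowed)

print(request_allowed("192.0.2.10"))    # True: inside 192.0.2.0/24
print(request_allowed("198.51.100.7"))  # False: outside both ranges
```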
Question #495 Topic 1
A company is conducting an internal audit. The company wants to ensure that the data in an Amazon S3 bucket that is associated with the company’s AWS Lake Formation data lake does not contain sensitive customer or employee data. The company wants to discover personally identifiable information (PII) or financial information, including passport numbers and credit card numbers.
Which solution will meet these requirements?
A. Configure AWS Audit Manager on the account. Select the Payment Card Industry Data Security Standards (PCI DSS) for auditing.
B. Configure Amazon S3 Inventory on the S3 bucket. Configure Amazon Athena to query the inventory.
C. Configure Amazon Macie to run a data discovery job that uses managed identifiers for the required data types.
D. Use Amazon S3 Select to run a report across the S3 bucket.
Community vote distribution
C (100%)
Blingy 4 weeks, 1 day ago
Macie = Sensitive PII
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
Amazon Macie is a service that helps discover, classify, and protect sensitive data stored in AWS. It uses machine learning algorithms and managed identifiers to detect various types of sensitive information, including personally identifiable information (PII) and financial information. By configuring Amazon Macie to run a data discovery job with the appropriate managed identifiers for the required data types (such as passport numbers and credit card numbers), the company can identify and classify any sensitive data present in the S3 bucket.
upvoted 3 times
Question #496 Topic 1
A company uses on-premises servers to host its applications. The company is running out of storage capacity. The applications use both block storage and NFS storage. The company needs a high-performing solution that supports local caching without re-architecting its existing
applications.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
A. Mount Amazon S3 as a file system to the on-premises servers.
B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
C. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
E. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
Community vote distribution
BD (100%)
elmogy 1 month ago
local caching is a key feature of AWS Storage Gateway solution https://aws.amazon.com/storagegateway/features/
https://aws.amazon.com/blogs/storage/aws-storage-gateway-increases-cache-4x-and-enhances-bandwidth-throttling/
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
By combining the deployment of an AWS Storage Gateway file gateway and an AWS Storage Gateway volume gateway, the company can address both its block storage and NFS storage needs, while leveraging local caching capabilities for improved performance.
upvoted 3 times
Piccalo 1 month, 1 week ago
B and D is the correct answer
upvoted 1 times
Question #497 Topic 1
A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the public subnet. However, the company wants a solution that will reduce the data output costs.
Which solution will meet these requirements MOST cost-effectively?
A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3 traffic.
D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3 traffic.
Community vote distribution
C (100%)
elmogy 1 month ago
private subnet needs to communicate with S3 --> VPC endpoint right away
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
A VPC gateway endpoint allows you to privately access Amazon S3 from within your VPC without using a NAT gateway or NAT instance. By provisioning a VPC gateway endpoint for S3, the service in the private subnet can directly communicate with S3 without incurring data transfer costs for traffic going through a NAT gateway.
upvoted 4 times
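As a concrete sketch, these are the kinds of parameters a gateway endpoint for S3 takes (the shape of `aws ec2 create-vpc-endpoint` / boto3 `create_vpc_endpoint`); the VPC and route-table IDs are placeholders:

```python
import json

# Sketch of a create-vpc-endpoint request for an S3 gateway endpoint.
# IDs below are placeholders, not real resources.
request = {
    "VpcId": "vpc-1234567890abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.s3",
    "VpcEndpointType": "Gateway",
    # Associating the private subnet's route table adds a route whose
    # destination is the S3 prefix list and whose target is the endpoint,
    # so S3 traffic bypasses the NAT gateway and its data-processing charges.
    "RouteTableIds": ["rtb-0a1b2c3d4e5f67890"],
}

print(json.dumps(request, indent=2))
```

Gateway endpoints for S3 carry no hourly or data charges, which is what makes C the most cost-effective option.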
Question #498 Topic 1
A company uses Amazon S3 to store high-resolution pictures in an S3 bucket. To minimize application changes, the company stores the pictures as the latest version of an S3 object. The company needs to retain only the two most recent versions of the pictures.
The company wants to reduce costs. The company has identified the S3 bucket as a large expense. Which solution will reduce the S3 costs with the LEAST operational overhead?
A. Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
B. Use an AWS Lambda function to check for older versions and delete all but the two most recent versions.
C. Use S3 Batch Operations to delete noncurrent object versions and retain only the two most recent versions.
D. Deactivate versioning on the S3 bucket and retain the two most recent versions.
Community vote distribution
A (100%)
antropaws 3 weeks, 2 days ago
Konb 1 month ago
Agree with LONGMEN
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
S3 Lifecycle policies allow you to define rules that automatically transition or expire objects based on their age or other criteria. By configuring an S3 Lifecycle policy to delete expired object versions and retain only the two most recent versions, you can effectively manage the storage costs while maintaining the desired retention policy. This solution is highly automated and requires minimal operational overhead as the lifecycle management is handled by S3 itself.
upvoted 3 times
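A sketch of what that Lifecycle rule could look like. Keeping the two most recent versions means the current version plus one noncurrent version, so the rule expires noncurrent versions beyond the newest one; the `NoncurrentDays` value and rule ID are illustrative assumptions:

```python
import json

# Sketch of an S3 Lifecycle configuration for option A.
lifecycle = {
    "Rules": [
        {
            "ID": "keep-two-most-recent-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 1,
                # Retain this many noncurrent versions newer than the cutoff;
                # together with the current version that keeps two in total.
                "NewerNoncurrentVersions": 1,
            },
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```

S3 evaluates the rule automatically, so there is no Lambda or Batch Operations job to operate.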
Question #499 Topic 1
A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The company's average connection utilization is less than 10%. A solutions architect must recommend a solution that will reduce the cost without compromising security.
Which solution will meet these requirements?
A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS account.
B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console.
C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with another AWS account.
D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing AWS account.
Community vote distribution
D (80%) B (20%)
Abrar2022 3 weeks, 2 days ago
Hosted Connection 50 Mbps, 100 Mbps, 200 Mbps,
Dedicated Connection 1 Gbps, 10 Gbps, and 100 Gbps
upvoted 2 times
omoakin 4 weeks ago
BBBBBBBBBBBBBB
upvoted 1 times
elmogy 1 month ago
The company needs to set up a cheaper connection (200 Mbps), but B is incorrect because dedicated connections can only be ordered at port speeds of 1, 10, or 100 Gbps. For more flexibility you can go with a hosted connection, which can be ordered at port speeds between 50 Mbps and 10 Gbps.
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
By opting for a lower capacity 200 Mbps connection instead of the 1 Gbps connection, the company can significantly reduce costs. This solution ensures a dedicated and secure connection while aligning with the company's low utilization, resulting in cost savings.
upvoted 3 times
norris81 1 month, 1 week ago
D
For Dedicated Connections, 1 Gbps, 10 Gbps, and 100 Gbps ports are available. For Hosted Connections, connection speeds of 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps and 10 Gbps may be ordered from approved AWS Direct Connect Partners. See AWS Direct Connect Partners for more information.
upvoted 4 times
nosense 1 month, 1 week ago
A hosted connection is a lower-cost option that is offered by AWS Direct Connect Partners
upvoted 3 times
Efren 1 month, 1 week ago
Also, there is no 200 Mbps dedicated Direct Connect speed.
upvoted 1 times
nosense 1 month, 1 week ago
Hosted Connection 50 Mbps, 100 Mbps, 200 Mbps,
Dedicated Connection 1 Gbps, 10 Gbps, and 100 Gbps
B would require the company to purchase additional hardware or software
upvoted 2 times
Question #500 Topic 1
A company has multiple Windows file servers on premises. The company wants to migrate and consolidate its files into an Amazon FSx for Windows File Server file system. File permissions must be preserved to ensure that access rights do not change.
Which solutions will meet these requirements? (Choose two.)
A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises network. Copy data to the device by using the AWS CLI. Ship the device back to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
Community vote distribution
AD (100%)
elmogy 1 month ago
the key is file permissions are preserved during the migration process. only datasync supports that
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
A This option involves deploying DataSync agents on your on-premises file servers and using DataSync to transfer the data directly to the FSx for Windows File Server. DataSync ensures that file permissions are preserved during the migration process.
D
This option involves using an AWS Snowcone device, a portable data transfer device. You would connect the Snowcone device to your on-premises network, launch DataSync agents on the device, and schedule DataSync tasks to transfer the data to FSx for Windows File Server. DataSync handles the migration process while preserving file permissions.
upvoted 4 times
nosense 1 month, 1 week ago
Option B would require copying the data to Amazon S3 before transferring it to Amazon FSx for Windows File Server. Option C would require the company to remove the drives from each file server and ship them to AWS.
upvoted 2 times
Question #501 Topic 1
A company wants to ingest customer payment data into the company's data lake in Amazon S3. The company receives payment data every minute on average. The company wants to analyze the payment data in real time. Then the company wants to ingest the data into the data lake.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use Amazon Kinesis Data Streams to ingest data. Use AWS Lambda to analyze the data in real time.
B. Use AWS Glue to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
C. Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
D. Use Amazon API Gateway to ingest data. Use AWS Lambda to analyze the data in real time.
Community vote distribution
C (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By leveraging the combination of Amazon Kinesis Data Firehose and Amazon Kinesis Data Analytics, you can efficiently ingest and analyze the payment data in real time without the need for manual processing or additional infrastructure management. This solution provides a streamlined and scalable approach to handle continuous data ingestion and analysis requirements.
upvoted 5 times
Axeashes Most Recent 1 week, 6 days ago
Kinesis Data Firehose is near real time (min. 60 sec). The question focuses on real-time processing/analysis plus efficiency, and Kinesis Data Streams is real-time ingestion.
upvoted 1 times
Axeashes 1 week, 6 days ago
Unless the intention is real time analytics not real time ingestion !
upvoted 1 times
Anmol_1010 1 month, 1 week ago
Did anyone take the exam recently? How many questions were there?
upvoted 2 times
omoakin 1 month, 1 week ago
Can we understand why admin's answers are mostly wrong? Or is this done on purpose?
upvoted 2 times
nosense 1 month, 1 week ago
Amazon Kinesis Data Firehose is the most optimal variant.
upvoted 3 times
kailu 1 month, 1 week ago
Shouldn't C be more appropriate?
upvoted 3 times
MostofMichelle 3 weeks, 5 days ago
You're right. I believe the answers are wrong on purpose, so good thing votes can be made on answers and discussions are allowed.
upvoted 1 times
Question #502 Topic 1
A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an Amazon Elastic Block Store (Amazon EBS) volume that is mounted inside the EC2 instance.
Which combination of actions should a solutions architect take to improve the performance and resilience of the website? (Choose two.)
A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance.
B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances.
C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance.
D. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the website.
E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.
Community vote distribution
CE (69%) AE (31%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By combining the use of Amazon EFS for shared file storage and Amazon CloudFront for content delivery, you can achieve improved performance and resilience for the website.
upvoted 5 times
mattcl Most Recent 4 days, 5 hours ago
A and E: S3 is perfect for images. Besides, it is the perfect partner of CloudFront.
upvoted 1 times
r3mo 2 weeks, 2 days ago
C,E is the answer.
upvoted 1 times
Abrar2022 3 weeks, 2 days ago
You don't mount S3
upvoted 2 times
omoakin 4 weeks ago
answer is CD
upvoted 2 times
RoroJ 1 month ago
E for sure;
SLA for S3 is 99.9% SLA for EFS is 99.99%
upvoted 2 times
VIad 1 month, 1 week ago
you can mount S3 on EC2 instance:
https://aws.amazon.com/blogs/storage/mounting-amazon-s3-to-an-amazon-ec2-instance-using-a-private-connection-to-s3-file-gateway/
upvoted 3 times
omoakin 1 month, 1 week ago
CE the best CloudFront better choice
upvoted 1 times
udo2020 1 month, 1 week ago
Why not D? I think Global Accelerator should be the solution, because with CloudFront only content is cached, and that only matters when distributing the content.
upvoted 2 times
kapit 1 week, 1 day ago
Not with Global Accelerator (ALB); an NLB would be OK.
upvoted 1 times
norris81 1 month, 1 week ago
C and E
upvoted 2 times
nosense 1 month, 1 week ago
Not sure whether A and E are valid.
upvoted 1 times
nosense 1 month, 1 week ago
Option C does not improve the resilience of the website. The website images will still be stored on a single Amazon EFS file system, which is a single point of failure. This is why I chose A.
With option A we can mount S3 via FUSE.
upvoted 1 times
elmogy 1 month ago
Where did you get the single-point-of-failure claim from? No sense! https://docs.aws.amazon.com/efs/latest/ug/disaster-recovery-resiliency.html
upvoted 1 times
kailu 1 month, 1 week ago
I agree with E but not with D. It should be C and E imo. Thoughts anyone?
upvoted 2 times
Efren 1 month, 1 week ago
I think the same. S3 cannot be mounted natively; I think the wording is wrong.
upvoted 1 times
norris81 1 month, 1 week ago
You could use FUSE, but C and E.
upvoted 1 times
Question #503 Topic 1
A company runs an infrastructure monitoring service. The company is building a new feature that will enable the service to monitor data in customer AWS accounts. The new feature will call AWS APIs in customer accounts to describe Amazon EC2 instances and read Amazon CloudWatch metrics.
What should the company do to obtain access to customer accounts in the MOST secure way?
A. Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch permissions and a trust policy to the company’s account.
B. Create a serverless API that implements a token vending machine to provide temporary AWS credentials for a role with read-only EC2 and CloudWatch permissions.
C. Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch permissions. Encrypt and store customer access and secret keys in a secrets management system.
D. Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with read-only EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user and password in a secrets management system.
Community vote distribution
A (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By having customers create an IAM role with the necessary permissions in their own accounts, the company can use AWS Identity and Access Management (IAM) to establish cross-account access. The trust policy allows the company's AWS account to assume the customer's IAM role temporarily, granting access to the specified resources (EC2 instances and CloudWatch metrics) within the customer's account. This approach follows the principle of least privilege, as the company only requests the necessary permissions and does not require long-term access keys or user credentials from the customers.
upvoted 5 times
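A sketch of the trust policy a customer would attach to the role in option A. The company account ID and external ID are placeholders; the `sts:ExternalId` condition is a common hardening step against the confused-deputy problem, not something the question requires:

```python
import json

# Sketch of a cross-account trust policy for the customer's IAM role.
# Account ID and external ID are illustrative placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # The monitoring company's account is the trusted principal.
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"sts:ExternalId": "example-external-id"}
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The company then calls `sts:AssumeRole` to obtain short-lived credentials, so no long-term keys ever leave the customer account.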
Piccalo Most Recent 1 month, 1 week ago
A. Roles give temporary credentials
upvoted 4 times
Efren 1 month, 1 week ago
Agreed. Role is the keyword.
upvoted 1 times
Question #504 Topic 1
A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The company's networking team has its own AWS account to manage the cloud network.
What is the MOST operationally efficient solution to connect the VPCs?
A. Set up VPC peering connections between each VPC. Update each associated subnet’s route table
B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet
C. Create an AWS Transit Gateway in the networking team’s AWS account. Configure static routes from each VPC.
D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team’s AWS account to connect to each VPC.
Community vote distribution
C (100%)
MirKhobaeb 1 month ago
Answer is C
upvoted 1 times
MirKhobaeb 1 month ago
A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPCs) and on-premises networks. As your cloud infrastructure expands globally, inter-Region peering connects transit gateways together using the AWS Global Infrastructure. Your data is automatically encrypted and never travels over the public internet.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
AWS Transit Gateway is a highly scalable and centralized hub for connecting multiple VPCs, on-premises networks, and remote networks. It simplifies network connectivity by providing a single entry point and reducing the number of connections required. In this scenario, deploying an AWS Transit Gateway in the networking team's AWS account allows for efficient management and control over the network connectivity across multiple VPCs.
upvoted 4 times
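For option C, a static route in the transit gateway's route table has this shape (the form `aws ec2 create-transit-gateway-route` / boto3 `create_transit_gateway_route` takes); the CIDR and IDs are placeholders:

```python
import json

# Sketch of a static route in the shared transit gateway route table.
# IDs and CIDR below are illustrative placeholders.
route_request = {
    "DestinationCidrBlock": "10.1.0.0/16",
    "TransitGatewayRouteTableId": "tgw-rtb-0123456789abcdef0",
    "TransitGatewayAttachmentId": "tgw-attach-0123456789abcdef0",
}

# Each spoke VPC's subnet route tables also need a route for the other
# VPCs' CIDR ranges pointing at the transit gateway.
print(json.dumps(route_request, indent=2))
```

The transit gateway lives in the networking team's account and is shared to the member accounts (e.g. via AWS RAM), giving a hub-and-spoke topology instead of hundreds of peering connections.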
nosense 1 month, 1 week ago
An AWS Transit Gateway is a highly scalable and secure way to connect VPCs in multiple AWS accounts. It is a central hub that routes traffic between VPCs, on-premises networks, and AWS services.
upvoted 3 times
Question #505 Topic 1
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 06:00 AM local time every day.
Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Community vote distribution
C (100%)
wRhlH 2 days, 22 hours ago
" If a job fails on one instance, another instance will reprocess the job". This ensures Spot Instances are enough for this case
upvoted 1 times
Abrar2022 3 weeks, 2 days ago
Since your batch jobs run for a specific period each day, using Spot Instances with the ability to scale out based on CPU usage is a more cost-effective choice.
upvoted 1 times
Blingy 4 weeks, 1 day ago
C FOR ME COS OF SPOT INSTANCES
upvoted 2 times
udo2020 1 month, 1 week ago
At first I thought it was B, but because of cost savings it should be C, Spot Instances.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Purchasing a 1-year Savings Plan (option A) or a 1-year Reserved Instance (option B) may provide cost savings, but they are more suitable for long-running, steady-state workloads. Since your batch jobs run for a specific period each day, using Spot Instances with the ability to scale out based on CPU usage is a more cost-effective choice.
upvoted 4 times
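For option C, the relevant part of the launch template is the instance market options. A minimal sketch of the `LaunchTemplateData` payload (the instance type is an illustrative assumption):

```python
import json

# Sketch of LaunchTemplateData for a Spot-based Auto Scaling group, i.e.
# what ec2 create-launch-template would take. Instance type is illustrative.
launch_template_data = {
    "InstanceType": "m5.large",
    "InstanceMarketOptions": {
        "MarketType": "spot",
        # "one-time" requests suit batch jobs: if an instance is interrupted,
        # the question says another instance reprocesses the failed job.
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
}

print(json.dumps(launch_template_data, indent=2))
```

Spot fits here precisely because the workload tolerates interruption: a reclaimed instance just means a job gets reprocessed elsewhere.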
Question #506 Topic 1
A social media company is building a feature for its website. The feature will give users the ability to upload photos. The company expects significant increases in demand during large events and must ensure that the website can handle the upload traffic from users.
Which solution meets these requirements with the MOST scalability?
A. Upload files from the user's browser to the application servers. Transfer the files to an Amazon S3 bucket.
B. Provision an AWS Storage Gateway file gateway. Upload files directly from the user's browser to the file gateway.
C. Generate Amazon S3 presigned URLs in the application. Upload files directly from the user's browser into an S3 bucket.
D. Provision an Amazon Elastic File System (Amazon EFS) file system. Upload files directly from the user's browser to the file system.
Community vote distribution
C (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
This approach allows users to upload files directly to S3 without passing through the application servers, reducing the load on the application and improving scalability. It leverages the client-side capabilities to handle the file uploads and offloads the processing to S3.
upvoted 6 times
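To show why presigned URLs scale so well: minting one is pure local computation on the application server, with no call to S3 at all. In practice you would just call boto3's `generate_presigned_url`; the hand-rolled SigV4 query-signing sketch below only illustrates the mechanism, and the bucket, key, and credentials are placeholders:

```python
import datetime
import hashlib
import hmac
import urllib.parse

# Hedged sketch of SigV4 query-string presigning for a direct browser
# upload (PUT). Real code should use boto3's generate_presigned_url.
def presign_put(bucket, key, region, access_key, secret_key, expires=3600):
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = "&".join(f"{k}={urllib.parse.quote(v, safe='')}"
                  for k, v in sorted(params.items()))
    canonical_request = f"PUT\n/{key}\n{qs}\nhost:{host}\n\nhost\nUNSIGNED-PAYLOAD"
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key via the HMAC chain, then sign.
    k = f"AWS4{secret_key}".encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{qs}&X-Amz-Signature={signature}"

url = presign_put("photos-bucket", "events/pic.jpg",
                  "us-east-1", "AKIDEXAMPLE", "examplesecretkey")
print(url)
```

The browser then PUTs the photo straight to that URL, so upload bandwidth never touches the application servers.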
nosense Most Recent 1 month, 1 week ago
C is the most scalable because it allows users to upload files directly to Amazon S3.
upvoted 3 times
Question #507 Topic 1
A company has a web application for travel ticketing. The application is based on a database that runs in a single data center in North America. The company wants to expand the application to serve a global user base. The company needs to deploy the application to multiple AWS Regions. Average latency must be less than 1 second on updates to the reservation database.
The company wants to have separate deployments of its web platform across multiple Regions. However, the company must maintain a single primary reservation database that is globally consistent.
Which solution should a solutions architect recommend to meet these requirements?
A. Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in each Regional deployment.
B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database to each Region. Use the correct Regional endpoint in each Regional deployment to access the database. Use AWS Lambda functions to process event streams in each Region to synchronize the databases.
Community vote distribution
A (76%) B (24%)
cloudenthusiast Highly Voted 1 month, 1 week ago
Using DynamoDB's global tables feature, you can achieve a globally consistent reservation database with low latency on updates, making it suitable for serving a global user base. The automatic replication provided by DynamoDB eliminates the need for manual synchronization between Regions.
upvoted 6 times
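For option A, turning the reservation table into a global table (2019.11.21 version) is just an `UpdateTable` call adding replica Regions after the table exists. A sketch of that request's shape; the table and Region names are illustrative:

```python
import json

# Sketch of an UpdateTable request that adds replica Regions, converting
# the table into a DynamoDB global table. Names are placeholders.
add_replica_request = {
    "TableName": "reservations",
    "ReplicaUpdates": [
        {"Create": {"RegionName": "eu-west-1"}},
        {"Create": {"RegionName": "ap-southeast-1"}},
    ],
}

print(json.dumps(add_replica_request, indent=2))
```

Each Regional deployment then reads and writes its local replica endpoint, and DynamoDB replicates changes across Regions automatically.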
mattcl Most Recent 4 days, 5 hours ago
B "An Aurora Global Database uses storage-based replication to replicate a database across multiple Regions, with typical latency of less than one second"
upvoted 1 times
antropaws 3 weeks, 2 days ago
It's B:
https://aws.amazon.com/blogs/architecture/using-amazon-aurora-global-database-for-low-latency-without-application-changes/
upvoted 1 times
vrevkov 1 week, 2 days ago
The option says plain Aurora, not Aurora Global Database.
upvoted 2 times
Abrar2022 3 weeks, 2 days ago
Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in each Regional deployment.
upvoted 1 times
omoakin 4 weeks ago
BBBBBBBBBBBB
upvoted 1 times
nosense 1 month, 1 week ago
this is why b for me
upvoted 2 times
nosense 1 month, 1 week ago
A is not scalable because Amazon DynamoDB is a NoSQL database that is not designed for global consistency.
C This solution is not as scalable as Amazon Aurora because Amazon RDS for MySQL does not support read replicas in multiple Regions.
upvoted 1 times
Abrar2022 1 week, 4 days ago
?????? DynamoDB is not scalable????????
upvoted 1 times
dacosa 1 month, 1 week ago
Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in each Regional deployment.
upvoted 3 times
Efren 1 month, 1 week ago
For me same, Dynamo DB with global tables
upvoted 1 times
Question #508 Topic 1
A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the us-west-1 Region. The company manually backs up the workloads to create an image as needed.
In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the us-west-2 Region. The company wants no more than 24 hours of data loss on the EC2 instances. The company also wants to automate any backups of the EC2 instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Choose two.)
A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Copy the image on demand.
B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.
C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2 instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the backup data to us-west-2.
D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to run twice daily.
E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.
Community vote distribution
BD (100%)
antropaws 3 weeks, 2 days ago
I also vote B and D.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
Option B suggests using an EC2-backed Amazon Machine Image (AMI) lifecycle policy to automate the backup process. By configuring the policy to run twice daily and specifying the copy to the us-west-2 Region, the company can ensure regular backups are created and copied to the alternate region.
Option D proposes using AWS Backup, which provides a centralized backup management solution. By creating a backup vault and backup plan based on tag values, the company can automate the backup process for the EC2 instances. The backup schedule can be set to run twice daily, and the destination for the copy can be defined as the us-west-2 Region.
upvoted 4 times
cloudenthusiast 1 month, 1 week ago
Both options automate the backup process and include copying the backups to the us-west-2 Region, ensuring data resilience in the event of a disaster. These solutions minimize administrative effort by leveraging automated backup and copy mechanisms provided by AWS services.
upvoted 2 times
nosense 1 month, 1 week ago
solutions are both automated and require no manual intervention to create or copy backups
upvoted 4 times
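The AWS Backup half of the answer (option D) can be sketched as the backup-plan payload that `backup.create_backup_plan` expects. The vault name, destination ARN, and rule name below are illustrative assumptions:

```python
# Hypothetical sketch of option D: a twice-daily backup plan whose recovery
# points are copied automatically to a vault in us-west-2. Names/ARNs are made up.

def backup_plan_twice_daily(vault_name, dest_vault_arn):
    return {
        "BackupPlanName": "ec2-dr-plan",
        "Rules": [{
            "RuleName": "twice-daily",
            "TargetBackupVaultName": vault_name,
            # Two runs per day (00:00 and 12:00 UTC) keeps data loss under 24 hours.
            "ScheduleExpression": "cron(0 0,12 * * ? *)",
            # Copy each recovery point to the us-west-2 vault without a Lambda job.
            "CopyActions": [{"DestinationBackupVaultArn": dest_vault_arn}],
        }],
    }
```

The built-in copy action is what makes D lower-effort than C, which reimplements the copy with a scheduled Lambda function.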
Question #509 Topic 1
A company operates a two-tier application for image processing. The application uses two Availability Zones, each with one public subnet and one private subnet. An Application Load Balancer (ALB) for the web tier uses the public subnets. Amazon EC2 instances for the application tier use
the private subnets.
Users report that the application is running more slowly than expected. A security audit of the web server log files shows that the application is
receiving millions of illegitimate requests from a small number of IP addresses. A solutions architect needs to resolve the immediate performance problem while the company investigates a more permanent solution.
What should the solutions architect recommend to meet this requirement?
A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources.
B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources.
D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
Community vote distribution
B (82%) A (18%)
lucdt4 1 month ago
A wrong because security group can't deny (only allow)
upvoted 4 times
fakrap 1 month, 1 week ago
A is wrong because you cannot put any deny in security group
upvoted 2 times
Rob1L 1 month, 1 week ago
You cannot Deny on SG, so it's B
upvoted 4 times
cloudenthusiast 1 month, 1 week ago
In this scenario, the security audit reveals that the application is receiving millions of illegitimate requests from a small number of IP addresses. To address this issue, it is recommended to modify the network ACL (Access Control List) for the web tier subnets.
By adding an inbound deny rule specifically targeting the IP addresses that are consuming resources, the network ACL can block the illegitimate traffic at the subnet level before it reaches the web servers. This will help alleviate the excessive load on the web tier and improve the application's performance.
upvoted 4 times
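As a concrete sketch of option B, this is the shape of an `ec2.create_network_acl_entry` request for the deny rule. The ACL ID, offending CIDR, and rule number are illustrative assumptions:

```python
# Hypothetical sketch of option B: an inbound NACL deny entry for one abusive CIDR.
# IDs and addresses are made up for illustration.

def nacl_deny_entry(acl_id, attacker_cidr, rule_number):
    # NACL rules are evaluated in ascending rule-number order, so the deny
    # must use a lower number than the subnet's existing allow rules.
    return {
        "NetworkAclId": acl_id,
        "RuleNumber": rule_number,
        "Protocol": "-1",          # all protocols; port range not needed
        "RuleAction": "deny",
        "Egress": False,           # inbound rule
        "CidrBlock": attacker_cidr,
    }
```

A security group cannot express this at all, since security groups only have allow rules.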
nosense 1 month, 1 week ago
Option B is not as effective as option A
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
A and C out due to the fact that SG does not have deny on allow rules.
upvoted 2 times
y0 1 month, 1 week ago
Security group only have allow rules
upvoted 1 times
nosense 1 month, 1 week ago
yeah, my mistake. B should be
upvoted 1 times
Question #510 Topic 1
A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region. Applications that run in a VPC in eu-west-1 need to communicate securely with databases that run in a VPC in ap-southeast-2.
Which network design will meet these requirements?
A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create an inbound rule in the eu-west-1 application security group that allows traffic from the database server IP addresses in the ap-southeast-2 security group.
B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that references the security group ID of the application servers in eu-west-1.
C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that allows traffic from the eu-west-1 application server IP addresses.
D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-2 VPC. After the transit gateways are properly peered and routing is configured, create an inbound rule in the database security group that references the security group ID of the application servers in eu-west-1.
Community vote distribution
C (55%) B (45%)
Chris22usa 21 hours, 59 minutes ago
Posted it on ChatGPT and it gave me answer D. What the heck is with this?
upvoted 1 times
haoAWS 2 days, 3 hours ago
B is wrong because it is in a different Region, so a reference to the security group ID will not work. A is wrong because you need to update the route table. The answer should be C.
upvoted 1 times
mattcl 4 days, 5 hours ago
It's B. What happens if the application server IP addresses change (option C)? You must manually change the IP in the security group again.
upvoted 1 times
antropaws 6 days, 16 hours ago
I thought B, but I vote C after checking Axeashes response.
upvoted 1 times
Axeashes 1 week, 5 days ago
"You cannot reference the security group of a peer VPC that's in a different Region. Instead, use the CIDR block of the peer VPC." https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
upvoted 3 times
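Given the cross-Region limitation quoted above, option C's rule has to allow the peer VPC's CIDR instead of referencing a security group ID. A minimal sketch of the `ec2.authorize_security_group_ingress` payload, with made-up IDs, CIDR, and a PostgreSQL-style port as assumptions:

```python
# Hypothetical sketch of option C: the ap-southeast-2 database security group
# allows the eu-west-1 application VPC's CIDR block over the peering connection,
# because a cross-Region SG reference is not supported. Values are illustrative.

def db_ingress_from_peer_cidr(db_sg_id, peer_vpc_cidr):
    return {
        "GroupId": db_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 5432, "ToPort": 5432,   # example database port
            "IpRanges": [{"CidrIp": peer_vpc_cidr,
                          "Description": "app servers in eu-west-1 VPC"}],
        }],
    }
```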
HelioNeto 3 weeks, 5 days ago
I think the answer is C because the security groups are in different VPCs. When the question wants to allow traffic from app vpc to database vpc i think using peering connection you will be able to add the security groups rules using private ip addresses of app servers. I don't think the database VPC will identify the security group id of another VPC.
upvoted 1 times
REzirezi 1 month, 1 week ago
D You cannot create a VPC peering connection between VPCs in different regions.
upvoted 2 times
fakrap 1 month, 1 week ago
You can peer any two VPCs in different Regions, as long as they have distinct, non-overlapping CIDR blocks. This ensures that all of the private IP addresses are unique, and it allows all of the resources in the VPCs to address each other without the need for any form of network address translation (NAT).
upvoted 1 times
RainWhisper 1 month, 1 week ago
You can peer any two VPCs in different Regions, as long as they have distinct, non-overlapping CIDR blocks https://docs.aws.amazon.com/devicefarm/latest/developerguide/amazon-vpc-cross-region.html
upvoted 1 times
nosense 1 month, 1 week ago
b for me. bcs correct inbound rule, and not overhead
upvoted 2 times
cloudenthusiast 1 month, 1 week ago
Option B suggests configuring a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. By establishing this peering connection, the VPCs can communicate with each other over their private IP addresses.
Additionally, updating the subnet route tables is necessary to ensure that the traffic destined for the remote VPC is correctly routed through the VPC peering connection.
To secure the communication, an inbound rule is created in the ap-southeast-2 database security group. This rule references the security group ID of the application servers in the eu-west-1 VPC, allowing traffic only from those instances. This approach ensures that only the authorized application servers can access the databases in the ap-southeast-2 VPC.
upvoted 3 times
Question #511 Topic 1
A company is developing software that uses a PostgreSQL database schema. The company needs to configure multiple development
environments and databases for the company's developers. On average, each development environment is used for half of the 8-hour workday. Which solution will meet these requirements MOST cost-effectively?
A. Configure each development environment with its own Amazon Aurora PostgreSQL database
B. Configure each development environment with its own Amazon RDS for PostgreSQL Single-AZ DB instances
C. Configure each development environment with its own Amazon Aurora On-Demand PostgreSQL-Compatible database
D. Configure each development environment with its own Amazon S3 bucket by using Amazon S3 Object Select
Community vote distribution
C (73%) B (27%)
cloudenthusiast Highly Voted 1 month, 1 week ago
Option C suggests using Amazon Aurora On-Demand PostgreSQL-Compatible databases for each development environment. This option provides the benefits of Amazon Aurora, which is a high-performance and scalable database engine, while allowing you to pay for usage on an on-demand basis. Amazon Aurora On-Demand instances are typically more cost-effective for individual development environments compared to the provisioned capacity options.
upvoted 6 times
cloudenthusiast 1 month, 1 week ago
Option B suggests using Amazon RDS for PostgreSQL Single-AZ DB instances for each development environment. While Amazon RDS is a reliable and cost-effective option, it may have slightly higher costs compared to Amazon Aurora On-Demand instances.
upvoted 3 times
MrAWSAssociate Most Recent 1 week, 2 days ago
C, more specific "Aurora Serverless V2", check the link: https://aws.amazon.com/rds/aurora/serverless/
upvoted 1 times
Bill1000 3 weeks, 1 day ago
With Aurora Serverless, you create a database, specify the desired database capacity range, and connect your applications. You pay on a per-second basis for the database capacity that you use when the database is active, and migrate between standard and serverless configurations with a few steps in the Amazon Relational Database Service (Amazon RDS) console.
upvoted 1 times
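The Aurora Serverless v2 idea above can be sketched as the cluster parameters passed to `rds.create_db_cluster`. The identifier and the capacity range are assumptions for a small dev environment; a real call would also set credentials and networking:

```python
# Hypothetical sketch: an Aurora Serverless v2 PostgreSQL-compatible dev cluster.
# Capacity (in ACUs) scales with load, so a half-day-idle environment costs little.

def dev_cluster_params(cluster_id):
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora-postgresql",
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": 0.5,   # smallest ACU step while idle
            "MaxCapacity": 4.0,   # illustrative cap for a dev workload
        },
    }
```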
Felix_br 3 weeks, 3 days ago
Amazon Aurora On-Demand is a pay-per-use deployment option for Amazon Aurora that allows you to create and destroy database instances as needed. This is ideal for development environments that are only used for part of the day, as you only pay for the database instance when it is in use.
The other options are not as cost-effective. Option A, configuring each development environment with its own Amazon Aurora PostgreSQL database, would require you to pay for the database instance even when it is not in use. Option B, configuring each development environment with its own Amazon RDS for PostgreSQL Single-AZ DB instance, would also require you to pay for the database instance even when it is not in use.
Option D, configuring each development environment with its own Amazon S3 bucket by using Amazon S3 Object Select, is not a viable option as Amazon S3 is not a database.
upvoted 1 times
elmogy 1 month ago
Option B would be the most cost-effective solution for configuring development environments. Amazon RDS for PostgreSQL Single-AZ DB instances would provide a cost-effective solution for a development environment. Amazon Aurora has higher cost than RDS (20% more)
upvoted 1 times
Rob1L 1 month, 1 week ago
Amazon Aurora, whether On-Demand or not (Option A and C), provides higher performance and is more intended for production environments. It also typically has a higher cost compared to RDS,
upvoted 2 times
Anmol_1010 1 month, 1 week ago
It's B, the most cost-effective. If it were performance, then it would be option A.
upvoted 1 times
Question #512 Topic 1
A company uses AWS Organizations with resources tagged by account. The company also uses AWS Backup to back up its AWS infrastructure resources. The company needs to back up all AWS resources.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Config to identify all untagged resources. Tag the identified resources programmatically. Use tags in the backup plan.
B. Use AWS Config to identify all resources that are not running. Add those resources to the backup vault.
C. Require all AWS account owners to review their resources to identify the resources that need to be backed up.
D. Use Amazon Inspector to identify all noncompliant resources.
Community vote distribution
A (100%)
Bill1000 3 weeks, 1 day ago
Vote A
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
This solution allows you to leverage AWS Config to identify any untagged resources within your AWS Organizations accounts. Once identified, you can programmatically apply the necessary tags to indicate the backup requirements for each resource. By using tags in the backup plan configuration, you can ensure that only the tagged resources are included in the backup process, reducing operational overhead and ensuring all necessary resources are backed up.
upvoted 3 times
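The programmatic tagging step of option A might look like the sketch below. The resource shape and the `backup-plan` tag key are illustrative assumptions; real code would read resources from AWS Config and call the tagging APIs:

```python
# Hypothetical sketch of option A's remediation step: given resources reported
# by AWS Config, build tag operations for any resource missing the backup tag.

BACKUP_TAG = "backup-plan"   # assumed tag key used by the backup plan selection

def tag_operations(resources):
    """Return (arn, tags) pairs for every resource missing the backup tag."""
    ops = []
    for res in resources:
        tags = res.get("Tags", {})
        if BACKUP_TAG not in tags:
            ops.append((res["Arn"], {BACKUP_TAG: "default"}))
    return ops
```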
Question #513 Topic 1
A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences
unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability. What should a solutions architect do to meet these requirements?
A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and store the images in an Amazon S3 bucket.
B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an Amazon RDS database.
C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket.
D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the resize jobs.
Community vote distribution
A (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By using Amazon S3 and AWS Lambda together, you can create a serverless architecture that provides highly scalable and available image resizing capabilities. Here's how the solution would work:
Set up an Amazon S3 bucket to store the original images uploaded by users.
Configure an event trigger on the S3 bucket to invoke an AWS Lambda function whenever a new image is uploaded.
The Lambda function can be designed to retrieve the uploaded image, perform the necessary resizing operations based on device requirements, and store the resized images back in the S3 bucket or a different bucket designated for resized images.
Configure the Amazon S3 bucket to make the resized images publicly accessible for serving to users.
upvoted 9 times
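The Lambda trigger in the walkthrough above can be sketched as an event-parsing handler. The bucket layout, key prefixes, and device sizes are illustrative assumptions; the actual image resizing (e.g. with an imaging library) is elided:

```python
# Hypothetical sketch of option A: parse the S3 put event that triggers the
# Lambda function and derive one destination key per target device width.

SIZES = {"phone": 320, "tablet": 768, "desktop": 1280}   # assumed widths (px)

def resized_keys(event):
    """Map each device label to the S3 key where its resized image would go."""
    record = event["Records"][0]["s3"]
    key = record["object"]["key"]              # e.g. "uploads/cat.jpg"
    name = key.rsplit("/", 1)[-1]
    return {label: f"resized/{px}/{name}" for label, px in SIZES.items()}
```

Because Lambda scales per event, this handles the unpredictable upload traffic with no servers to manage.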
Question #514 Topic 1
A company is running a microservices application on Amazon EC2 instances. The company wants to migrate the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for scalability. The company must configure the Amazon EKS control plane with endpoint private access set to true and endpoint public access set to false to maintain security compliance. The company must also put the data plane in private subnets. However, the company has received error notifications because the node cannot join the cluster.
Which solution will allow the node to join the cluster?
A. Grant the required permission in AWS Identity and Access Management (IAM) to the AmazonEKSNodeRole IAM role.
B. Create interface VPC endpoints to allow nodes to access the control plane.
C. Recreate nodes in the public subnet. Restrict security groups for EC2 nodes.
D. Allow outbound traffic in the security group of the nodes.
Community vote distribution
B (57%) A (43%)
cloudenthusiast Highly Voted 1 month, 1 week ago
By creating interface VPC endpoints, you can enable the necessary communication between the Amazon EKS control plane and the nodes in private subnets. This solution ensures that the control plane maintains endpoint private access (set to true) and endpoint public access (set to false) for security compliance.
upvoted 5 times
vrevkov Most Recent 1 week, 2 days ago
This is A because the control plane and data plane nodes are in the same VPC and data plane nodes don't need any interface VPC endpoints, but they definitely need to have IAM role with correct permissions.
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
upvoted 2 times
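For reference, the node role from the guide linked above boils down to a trust policy plus three managed policies. A sketch under the assumption that the role name and setup otherwise follow the EKS defaults:

```python
# Hypothetical sketch of option A: the IAM role worker nodes assume, per the
# EKS create-node-role guide. The role name itself is up to the operator.

NODE_MANAGED_POLICIES = [
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
]

def node_role_trust_policy():
    # The EC2 instances that run as worker nodes must be able to assume the role.
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
```

Whether A or B applies depends on what is actually misconfigured: missing role permissions block registration even when network paths exist, while a fully private cluster also needs working connectivity from the private subnets to the required endpoints.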
CVliner 6 days, 12 hours ago
Please be noted, that A fits only for security for nodes (not cluster) For cluster we have to write IAM role name eksClusterRole. https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html
upvoted 2 times
antropaws 3 weeks, 2 days ago
The question is:
Which solution will allow the node to join the cluster? The answer is A:
Amazon EKS node IAM role
Nodes receive permissions for these API calls through an IAM instance profile and associated policies. Before you can launch nodes and register them into a cluster, you must create an IAM role for those nodes to use when they are launched. This requirement applies to nodes launched with the Amazon EKS optimized AMI provided by Amazon, or with any other node AMIs that you intend to use.
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
upvoted 3 times
elmogy 1 month ago
Kubernetes API requests within your cluster's VPC (such as node to control plane communication) use the private VPC endpoint.
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
upvoted 4 times
y0 1 month, 1 week ago
Check this : https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html Also, EKS does not require VPC endpoints. This is not the right use case for EKS
upvoted 4 times
Question #515 Topic 1
A company is migrating an on-premises application to AWS. The company wants to use Amazon Redshift as a solution. Which use cases are suitable for Amazon Redshift in this scenario? (Choose three.)
A. Supporting data APIs to access data with traditional, containerized, and event-driven applications
B. Supporting client-side and server-side encryption
C. Building analytics workloads during specified hours and when the application is not active
D. Caching data to reduce the pressure on the backend database
E. Scaling globally to support petabytes of data and tens of millions of requests per minute
F. Creating a secondary replica of the cluster by using the AWS Management Console
Community vote distribution
BCE (89%) 11%
elmogy 1 month ago
Amazon Redshift is a data warehouse solution, so it is suitable for:
-Supporting encryption (client-side and server-side)
-Handling analytics workloads, especially during off-peak hours when the application is less active
-Scaling to large amounts of data and high query volumes for analytics purposes
The following options are incorrect because:
A) Data APIs are not typically used with Redshift. It is more for running SQL queries and analytics.
D) Redshift is not typically used for caching data. It is for analytics and data warehouse purposes.
F) Redshift clusters do not create secondary replicas in the management console; they are standalone clusters. You could create a DR cluster from a snapshot and restore it to another Region (automated or manual), but I do not think that is what is meant in this option.
upvoted 4 times
Rob1L 1 month, 1 week ago
Supporting client-side and server-side encryption: Amazon Redshift supports both client-side and server-side encryption for improved data security.
Building analytics workloads during specified hours and when the application is not active: Amazon Redshift is optimized for running complex analytic queries against very large datasets, making it a good choice for this use case.
E. Scaling globally to support petabytes of data and tens of millions of requests per minute: Amazon Redshift is designed to handle petabytes of data, and to deliver fast query and I/O performance for virtually any size dataset.
upvoted 4 times
omoakin 1 month, 1 week ago
CEF for me
upvoted 2 times
Efren 1 month, 1 week ago
A seems correct
The Data API enables you to seamlessly access data from Redshift Serverless with all types of traditional, cloud-native, and containerized serverless web service-based applications and event-driven applications.
upvoted 1 times
Efren 1 month, 1 week ago
BCE for me
upvoted 1 times
y0 1 month, 1 week ago
U mean ACE rite?
upvoted 1 times
Efren 1 month, 1 week ago
Yeah not sure, but i would say ACE
upvoted 1 times
nosense 1 month, 1 week ago
b it's working, but not primary
upvoted 1 times
Question #516 Topic 1
A company provides an API interface to customers so the customers can retrieve their financial information. The company expects a larger number of requests during peak usage times of the year.
The company requires the API to respond consistently with low latency to ensure customer satisfaction. The company needs to provide a compute host for the API.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
C. Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
D. Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.
Community vote distribution
B (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
In the context of the given scenario, where the company wants low latency and consistent performance for their API during peak usage times, it would be more suitable to use provisioned concurrency. By allocating a specific number of concurrent executions, the company can ensure that there are enough function instances available to handle the expected load and minimize the impact of cold starts. This will result in lower latency and improved performance for the API.
upvoted 5 times
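The provisioned-concurrency setting described above is a single configuration call. A sketch of the payload `lambda.put_provisioned_concurrency_config` expects, where the function name, alias, and concurrency figure are illustrative assumptions:

```python
# Hypothetical sketch of option B: keep a pool of Lambda execution environments
# initialized ahead of the seasonal peak so the API avoids cold-start latency.

def provisioned_concurrency_params(function_name, alias, instances):
    return {
        "FunctionName": function_name,
        "Qualifier": alias,   # provisioned concurrency targets a version or alias
        "ProvisionedConcurrentExecutions": instances,
    }
```

Reserved concurrency (option D) only caps how many concurrent executions the function may use; it does not pre-warm instances, which is why it does not help latency.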
MirKhobaeb Most Recent 1 month ago
AWS Lambda provides a highly scalable and distributed infrastructure that automatically manages the underlying compute resources. It automatically scales your API based on the incoming request load, allowing it to respond consistently with low latency, even during peak times. AWS Lambda takes care of infrastructure provisioning, scaling, and resource management, allowing you to focus on writing the code for your API logic.
upvoted 3 times
Question #517 Topic 1
A company wants to send all AWS Systems Manager Session Manager logs to an Amazon S3 bucket for archival purposes. Which solution will meet this requirement with the MOST operational efficiency?
A. Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session data to.
B. Install the Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Export the logs to an S3 bucket from the group for archival purposes.
C. Create a Systems Manager document to upload all server logs to a central S3 bucket. Use Amazon EventBridge to run the Systems Manager document against all servers that are in the account daily.
D. Install an Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Create a CloudWatch logs subscription that pushes any incoming log events to an Amazon Kinesis Data Firehose delivery stream. Set Amazon S3 as the destination.
Community vote distribution
A (82%) B (18%)
Zuit 10 hours, 17 minutes ago
GPT argued for D.
B could be an option, by installing a logging package on all managed systems/EC2s etc. https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html
However, since it mentions the "Session Manager logs" I would tend towards A.
upvoted 1 times
MrAWSAssociate 1 week, 1 day ago
It should be "A".
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html
upvoted 1 times
secdgs 2 weeks ago
It has a menu to enable S3 logging.
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html#session-manager-logging-s3
upvoted 1 times
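Under the hood, the console switch linked above writes the S3 destination into the Session Manager preferences document. A sketch of that document content, where the bucket name is an assumption:

```python
# Hypothetical sketch of option A: session preferences content directing
# Session Manager to archive session logs to an S3 bucket. Bucket name is made up.

import json

def session_prefs_with_s3(bucket):
    return json.dumps({
        "schemaVersion": "1.0",
        "description": "Session Manager settings",
        "sessionType": "Standard_Stream",
        "inputs": {
            "s3BucketName": bucket,        # archive session logs here
            "s3EncryptionEnabled": True,
        },
    })
```

No agents, log groups, or delivery streams to operate, which is the "most operational efficiency" angle.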
Markie999 3 weeks ago
BBBBBBBBB
upvoted 1 times
Bill1000 3 weeks, 1 day ago
The option 'A' says "Enable S3 logging in the Systems Manager console." This means that you will enable the logs !! FOR !! S3 events, and it is not what the question asks. My vote is for option B, based on this article: https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html
upvoted 1 times
vrevkov 1 week, 2 days ago
But where do you want to install the Amazon CloudWatch agent in case of B?
upvoted 1 times
omoakin 4 weeks ago
DDDDDD
upvoted 1 times
Anmol_1010 1 month, 1 week ago
Option D is definitely not right.
It's option B.
upvoted 1 times
omoakin 1 month, 1 week ago
ChatGPT says option A is incorrect because enabling S3 logging in the Systems Manager console only logs information about the Systems Manager service, not the session logs.
It says the correct answer is B.
upvoted 1 times
RainWhisper 1 month ago
Question may not be very clear. A should be the answer. Below link is the documetation: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html#session-manager-logging-s3
upvoted 3 times
cloudenthusiast 1 month, 1 week ago
option A does not involve CloudWatch, while option D does. Therefore, in terms of operational overhead, option A would generally have less complexity and operational overhead compared to option D.
Option A simply enables S3 logging in the Systems Manager console, allowing you to directly send session logs to an S3 bucket. This approach is straightforward and requires minimal configuration.
On the other hand, option D involves installing and configuring the Amazon CloudWatch agent, creating a CloudWatch log group, setting up a CloudWatch Logs subscription, and configuring an Amazon Kinesis Data Firehose delivery stream to store logs in an S3 bucket. This requires additional setup and management compared to option A.
So, if minimizing operational overhead is a priority, option A would be a simpler and more straightforward choice.
upvoted 3 times
nosense 1 month, 1 week ago
A MOST operational efficiency?
upvoted 3 times
Question #518 Topic 1
An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A solutions architect wants to increase the disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
A. Enable storage autoscaling in RDS
B. Increase the RDS database instance size
C. Change the RDS database instance storage type to Provisioned IOPS
D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance
Community vote distribution
A (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
Enabling storage autoscaling allows RDS to automatically adjust the storage capacity based on the application's needs. When the storage usage exceeds a predefined threshold, RDS will automatically increase the allocated storage without requiring manual intervention or causing downtime. This ensures that the RDS database has sufficient disk space to handle the increasing storage requirements.
upvoted 7 times
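In practice, enabling storage autoscaling is one `rds.modify_db_instance` call that sets a storage ceiling above the current allocation. A sketch with an illustrative instance identifier and GiB figures:

```python
# Hypothetical sketch of option A: setting MaxAllocatedStorage turns on RDS
# storage autoscaling; RDS then grows storage toward the ceiling with no downtime.

def enable_storage_autoscaling(instance_id, current_gib, ceiling_gib):
    assert ceiling_gib > current_gib, "ceiling must exceed current allocation"
    return {
        "DBInstanceIdentifier": instance_id,
        "MaxAllocatedStorage": ceiling_gib,
        "ApplyImmediately": True,
    }
```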
haoAWS Most Recent 2 days, 3 hours ago
A is the best answer.
B will not work for increasing disk space, it only improve the IO performance.
C will not work because it will cause downtime.
D is too complicated and need much operational effort.
upvoted 1 times
RainWhisper 1 month ago
https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/
upvoted 1 times
Anmol_1010 1 month, 1 week ago
The key word is no downtime. A would be the best option.
upvoted 2 times
Question #519 Topic 1
A consulting company provides professional services to customers worldwide. The company provides solutions and tools for customers to expedite gathering and analyzing data on AWS. The company needs to centrally manage and deploy a common set of solutions and tools for customers to use for self-service purposes.
Which solution will meet these requirements?
A. Create AWS CloudFormation templates for the customers.
B. Create AWS Service Catalog products for the customers.
C. Create AWS Systems Manager templates for the customers.
D. Create AWS Config items for the customers.
Community vote distribution
B (100%)
Yadav_Sanjay 1 month ago
cloudenthusiast 1 month, 1 week ago
AWS Service Catalog allows you to create and manage catalogs of IT services that can be deployed within your organization. With Service Catalog, you can define a standardized set of products (solutions and tools in this case) that customers can self-service provision. By creating Service Catalog products, you can control and enforce the deployment of approved and validated solutions and tools.
upvoted 4 times
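A Service Catalog product as described above is essentially a named, versioned CloudFormation template. A sketch of the `servicecatalog.create_product` payload, where the product name, owner, and template URL are illustrative assumptions:

```python
# Hypothetical sketch of option B: a self-service product backed by a
# CloudFormation template. All names and the URL are made up for illustration.

def catalog_product(name, template_url):
    return {
        "Name": name,
        "Owner": "consulting-co-platform-team",
        "ProductType": "CLOUD_FORMATION_TEMPLATE",
        "ProvisioningArtifactParameters": {
            "Name": "v1",
            "Type": "CLOUD_FORMATION_TEMPLATE",
            "Info": {"LoadTemplateFromURL": template_url},
        },
    }
```

This is also why plain CloudFormation templates (option A) fall short: Service Catalog adds the central catalog, versioning, and access control for customer self-service.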
Question #520 Topic 1
A company is designing a new web application that will run on Amazon EC2 Instances. The application will use Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database will be moderate to high. The company needs to scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a maximum defined capacity.
B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class.
Community vote distribution
B (60%) A (30%) 10%
wRhlH 15 hours, 25 minutes ago
Not B for sure, "The company needs to scale in response to application traffic."
Between A and C, I would choose C. Because it's a new application and the traffic will be moderate to high, choosing C is both cost-effective and scalable.
upvoted 1 times
live_reply_developers 2 days, 17 hours ago
"With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and you are billed based on that. Furthermore if you can forecast your capacity requirements you can also reserve a portion of DynamoDB provisioned capacity and optimize your costs even further.
With provisioned capacity you can also use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate to ensure application performance, and also to potentially reduce costs. To configure auto scaling in DynamoDB, set the minimum and maximum levels of read and write capacity in addition to the target utilization percentage."
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
upvoted 1 times
F629 4 days, 14 hours ago
I think it's A. B is on-demand, but it may not save money. If it's a not-busy application, on-demand may save money, but to a medium to high busy level application, I prefer a provisioned.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
On-Demand Mode: With on-demand mode, DynamoDB automatically scales its capacity to handle the application's traffic. DynamoDB Standard Table Class: The DynamoDB Standard table class provides a balance between cost and performance.
Cost-Effectiveness: By using on-demand mode, the company only pays for the actual read and write requests made to the table, rather than provisioning and paying for a fixed amount of capacity units in advance.
upvoted 3 times
Efren 1 month, 1 week ago
B for me. Provisioned if we know how much traffic will come, but it's unpredictable, so we have to go for on-demand.
upvoted 3 times
nosense 1 month, 1 week ago
Changed to C.
Option A: you would need to purchase more capacity than you actually need. This would lead to unnecessary costs.
Option B: the company's application is expected to have moderate to high read and write throughput, so this option would not be cost-effective.
Option C: configure DynamoDB with provisioned read and write capacity by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, and set DynamoDB auto scaling to a maximum defined capacity.
upvoted 1 times
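The billing-mode debate above comes down to a single parameter on table creation. A minimal sketch of the two `create_table` request bodies as they would be passed via `boto3` (the table and key names are hypothetical):

```python
# On-demand mode (option B): no capacity planning, billed per request.
on_demand_table = {
    "TableName": "app-data",  # hypothetical table name
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand
    "TableClass": "STANDARD",
}

# Provisioned mode (option A): fixed capacity units, optionally auto scaled
# afterwards via Application Auto Scaling.
provisioned_table = {
    "TableName": "app-data",
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "BillingMode": "PROVISIONED",
    "ProvisionedThroughput": {"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
    "TableClass": "STANDARD",
}
```

Either dict would be unpacked into `dynamodb.create_table(**params)`; the Standard-IA variants only change `TableClass` to `STANDARD_INFREQUENT_ACCESS`.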
Question #521 Topic 1
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS account.
The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all the teams' DynamoDB tables.
Which authentication option will meet these requirements MOST securely?
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB table.
Community vote distribution
C (100%)
cloudenthusiast Highly Voted 1 month, 1 week ago
IAM Roles: IAM roles provide a secure way to grant permissions to entities within AWS. By creating an IAM role in each business account named BU_ROLE with the necessary permissions to access the DynamoDB table, the access can be controlled at the IAM role level.
Cross-Account Access: By configuring a trust policy in the BU_ROLE that trusts a specific role in the inventory application account (APP_ROLE), you establish a trusted relationship between the two accounts.
Least Privilege: By creating a specific IAM role (BU_ROLE) in each business account and granting it access only to the required DynamoDB table, you can ensure that each team's table is accessed with the least privilege principle.
Security Token Service (STS): The use of STS AssumeRole API operation in the inventory application account allows the application to assume the cross-account role (BU_ROLE) in each business account.
upvoted 8 times
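The cross-account pattern in option C hinges on two JSON policy documents attached to BU_ROLE. A sketch, with the account IDs, Region, and table name as hypothetical placeholders:

```python
import json

# Trust policy on BU_ROLE in each business account: it trusts only APP_ROLE
# in the inventory application account (account IDs are hypothetical).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/APP_ROLE"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy on BU_ROLE: least-privilege read access to one table.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:us-east-1:444455556666:table/inventory",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

The application then calls `sts.assume_role(RoleArn=<BU_ROLE arn>, RoleSessionName=...)` from APP_ROLE and uses the temporary credentials to read each table; no long-lived keys are ever distributed.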
mattcl Most Recent 4 days, 2 hours ago
Why not A?
upvoted 1 times
antropaws 6 days, 16 hours ago
It's complex, but looks C.
upvoted 1 times
eehhssaan 1 month, 1 week ago
I'll go with C, coming from two minds.
upvoted 2 times
nosense 1 month, 1 week ago
A or C. C looks more secure.
upvoted 1 times
omoakin 1 month, 1 week ago
CCCCCCCCCCC
upvoted 1 times
Question #522 Topic 1
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS). The company's workload is not consistent throughout the day. The company wants Amazon EKS to scale in and out according to the workload.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Use an AWS Lambda function to resize the EKS cluster.
B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
D. Use Amazon API Gateway and connect it to Amazon EKS.
E. Use AWS App Mesh to observe network activity.
Community vote distribution
BC (100%)
cloudenthusiast 1 month, 1 week ago
By combining the Kubernetes Cluster Autoscaler (option C) to manage the number of nodes in the cluster and enabling horizontal pod autoscaling (option B) with the Kubernetes Metrics Server, you can achieve automatic scaling of your EKS cluster and container applications based on workload demand. This approach minimizes operational overhead as it leverages built-in Kubernetes functionality and automation mechanisms.
upvoted 3 times
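Option B corresponds to a standard HorizontalPodAutoscaler object. A sketch of the manifest, written as the dict equivalent of the YAML that `kubectl apply` would take (the deployment name, replica bounds, and CPU target are hypothetical):

```python
# HorizontalPodAutoscaler (option B): scales pod replicas on CPU utilization
# reported by the Metrics Server. The Cluster Autoscaler (option C) then adds
# or removes nodes when pods can no longer be scheduled.
hpa_manifest = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web-hpa"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web"},
        "minReplicas": 2,
        "maxReplicas": 20,
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                "target": {"type": "Utilization", "averageUtilization": 70},
            },
        }],
    },
}
```

Both controllers run inside the cluster, which is why this combination carries less operational overhead than a custom Lambda resizer (option A).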
Question #523 Topic 1
A company runs a microservice-based serverless web application. The application must be able to retrieve data from multiple Amazon DynamoDB tables. A solutions architect needs to give the application the ability to retrieve the data with no impact on the baseline performance of the application.
Which solution will meet these requirements in the MOST operationally efficient way?
A. AWS AppSync pipeline resolvers
B. Amazon CloudFront with Lambda@Edge functions
C. Edge-optimized Amazon API Gateway with AWS Lambda functions
D. Amazon Athena Federated Query with a DynamoDB connector
Community vote distribution
D (56%) B (44%)
omoakin Highly Voted 1 month, 1 week ago
Great work, you made it to the last question. Good luck to you all.
upvoted 11 times
MostofMichelle 3 weeks, 5 days ago
good luck to you as well.
upvoted 4 times
wRhlH Most Recent 2 days, 20 hours ago
Why not C?
upvoted 1 times
DrWatson 3 weeks, 1 day ago
https://docs.aws.amazon.com/athena/latest/ug/connectors-dynamodb.html
upvoted 2 times
Rashi5778 3 weeks, 2 days ago
AWS AppSync pipeline resolvers, is the correct choice for retrieving data from multiple DynamoDB tables with no impact on the baseline performance of the microservice-based serverless web application.
upvoted 1 times
Buba26 3 weeks, 2 days ago
Good luck to everyone who came this far.
upvoted 1 times
Abrar2022 3 weeks, 2 days ago
all the best to ALL of you!!!
upvoted 1 times
elmogy 4 weeks ago
just passed yesterday 30-05-23, around 75% of the exam came from here, some with light changes.
upvoted 4 times
Efren 1 month ago
Hi team, passed the exam as well. Studying the questions and comments and of course doing labs and some training, you should be safe. Don't rely just on the questions, please.
upvoted 1 times
y0 1 month, 1 week ago
The Amazon Athena DynamoDB connector enables Amazon Athena to communicate with DynamoDB so that you can query your tables with SQL. Write operations like INSERT INTO are not supported.
upvoted 3 times
fakrap 1 month, 1 week ago
Good luck to everyone, taking the exam in about 20 hours time.
upvoted 4 times
fakrap 1 month ago
Just in case you are wondering, yeap.. I passed!
upvoted 11 times
MostofMichelle 3 weeks, 5 days ago
woohoo!
upvoted 2 times
omoakin 1 month, 1 week ago
BBBBBBBBBBB
upvoted 1 times
omoakin 1 month, 1 week ago
Quick data retrieval.
upvoted 1 times
Anmol_1010 1 month, 1 week ago
GPT says D.
upvoted 1 times
cloudenthusiast 1 month, 1 week ago
By using CloudFront with Lambda@Edge, you can benefit from the distributed CDN infrastructure, reduce the load on DynamoDB, and retrieve data with low latency. The use of caching also helps to minimize the impact on baseline performance and improve the overall efficiency of data retrieval in your application.
upvoted 3 times
nosense 1 month, 1 week ago
agree with a
upvoted 3 times
dydzah 1 month, 1 week ago
https://aws.amazon.com/blogs/mobile/appsync-pipeline-resolvers-2/
upvoted 1 times
Question #524 Topic 1
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on.
Which solution will meet these requirements with the LEAST effort?
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Community vote distribution
C (50%) D (50%)
manuh 1 day, 1 hour ago
A dashboard isn't required. Also refer to https://repost.aws/knowledge-center/troubleshoot-iam-permission-errors
upvoted 1 times
haoAWS 1 day, 21 hours ago
I struggled between C and D for a long time and asked ChatGPT. ChatGPT says D is better, since Athena requires more SQL expertise.
upvoted 1 times
antropaws 6 days, 11 hours ago
Both C and D are feasible. I vote for D:
Amazon QuickSight supports logging the following actions as events in CloudTrail log files:
Whether the request was made with root or AWS Identity and Access Management user credentials
Whether the request was made with temporary security credentials for an IAM role or federated user
Whether the request was made by another AWS service
https://docs.aws.amazon.com/quicksight/latest/user/logging-using-cloudtrail.html
upvoted 1 times
PCWu 1 week, 5 days ago
The Answer will be C:
Need to use Athena to query keywords and sort out the error logs. D: No need to use Amazon QuickSight to create the dashboard.
upvoted 1 times
Axeashes 1 week, 5 days ago
"Using Athena with CloudTrail logs is a powerful way to enhance your analysis of AWS service activity." https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
upvoted 1 times
oras2023 2 weeks, 4 days ago
Analyze and TROUBLESHOOT: looks like Athena.
upvoted 1 times
oras2023 1 week, 6 days ago
https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
upvoted 1 times
alexandercamachop 3 weeks ago
It specifies analyze, not query logs.
Which is why option D is the best one as it provides dashboards to analyze the logs.
upvoted 2 times
Question #525 Topic 1
A company wants to add its existing AWS usage cost to its operation cost dashboard. A solutions architect needs to recommend a solution that will give the company access to its usage cost programmatically. The company must be able to access cost data for the current year and forecast costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
Community vote distribution
A (100%)
MrAWSAssociate 1 week, 1 day ago
From AWS Documentation*:
"You can view your costs and usage using the Cost Explorer user interface free of charge. You can also access your data programmatically using the Cost Explorer API. Each paginated API request incurs a charge of $0.01. You can't disable Cost Explorer after you enable it."
* Source:
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-cost-explorer/interfaces/costexplorerpaginationconfiguration.html
upvoted 3 times
alexandercamachop 3 weeks ago
Answer is: A
It says dashboard = Cost Explorer, therefore C and D are eliminated.
It also says programmatically, which means no manual intervention, therefore the API.
upvoted 4 times
oras2023 3 weeks, 1 day ago
least operational overhead = API access
upvoted 3 times
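Option A maps to two Cost Explorer API operations. A sketch of the request bodies, as they would be passed to `boto3`'s `ce.get_cost_and_usage` and `ce.get_cost_forecast` (the date ranges are hypothetical examples of "current year" and "next 12 months"):

```python
# Current-year actuals; results are paginated via NextPageToken, which is the
# pagination the option refers to.
cost_and_usage_request = {
    "TimePeriod": {"Start": "2023-01-01", "End": "2023-12-31"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
}

# 12-month forecast.
cost_forecast_request = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-12-31"},
    "Granularity": "MONTHLY",
    "Metric": "UNBLENDED_COST",
}
```

Because both are plain API calls, the dashboard can pull fresh numbers on demand with no file downloads or scheduled report plumbing.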
Question #526
Topic 1
A solutions architect is reviewing the resilience of an application. The solutions architect notices that a database administrator recently failed over the application's Amazon Aurora PostgreSQL database writer instance as part of a scaling exercise. The failover resulted in 3 minutes of downtime for the application.
Which solution will reduce the downtime for scaling exercises with the LEAST operational overhead?
A. Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
B. Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update the application to use the secondary cluster's writer endpoint.
C. Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
Community vote distribution
D (100%)
alexandercamachop 3 weeks ago
D is the correct answer.
It is talking about the write database. Not reader.
Amazon RDS proxy allows you to automatically route write request to the healthy writer, minimizing downtime.
upvoted 3 times
AshishRocks 3 weeks, 1 day ago
Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
D is the answer
upvoted 2 times
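Option D boils down to one `rds.create_db_proxy` call plus pointing the application at the proxy endpoint. A sketch of the parameters (the proxy name, ARNs, and subnet IDs are hypothetical):

```python
# RDS Proxy (option D): the proxy holds connections open and routes them to
# the current writer, so a failover does not surface as minutes of downtime.
create_proxy_params = {
    "DBProxyName": "app-proxy",
    "EngineFamily": "POSTGRESQL",
    "Auth": [{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
    }],
    "RoleArn": "arn:aws:iam::111122223333:role/proxy-role",  # lets the proxy read the secret
    "VpcSubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
}
```

The application's only change is swapping its connection string to the proxy endpoint, which is why this has the least operational overhead of the four options.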
Question #527 Topic 1
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture consists of web servers and application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The architecture
includes an Amazon Aurora global database cluster that extends across multiple Availability Zones. The company wants to expand globally and to ensure that its application has minimal downtime.
Which solution will provide the MOST fault tolerance?
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
Community vote distribution
D (67%) B (17%) A (17%)
Zuit 9 hours, 47 minutes ago
D seems fitting: a global database deployed in the new Region.
upvoted 1 times
manuh 1 day ago
A replicated DB doesn't mean the two will act as a single DB once the transfer is completed. A global database is the correct approach.
upvoted 1 times
r3mo 2 weeks, 2 days ago
"D" is the answer: because Aws Aurora Global Database allows you to read and write from any region in the global cluster. This enables you to distribute read and write workloads globally, improving performance and reducing latency. Data is replicated synchronously across regions, ensuring strong consistency.
upvoted 3 times
Henrytml 2 weeks, 3 days ago
A is the only answer that keeps using the existing ELBs: the web, app, and DB tiers are all taken care of by replicating into the second Region, and lastly Route 53 handles failover across multiple Regions.
upvoted 1 times
manuh 1 day ago
Also, an Auto Scaling group can't span beyond a Region.
upvoted 1 times
Henrytml 1 week, 5 days ago
I will revise my answer to a standby web tier in the second Region, instead of triggering a scale-out.
upvoted 1 times
alexandercamachop 3 weeks ago
B and C are discarded.
The answer is between A and D.
I would go with D because it explicitly creates the web/app tier in the second Region, whereas A just auto scales into a secondary Region rather than always having resources in the second Region.
upvoted 3 times
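Every option above leans on Route 53 failover routing. A sketch of the two record sets behind a failover policy, as they would appear in a `change_resource_record_sets` call (the domain, health check IDs, hosted zone IDs, and ELB DNS names are hypothetical):

```python
# PRIMARY record: served while its health check passes.
primary_record = {
    "Name": "app.example.com",
    "Type": "A",
    "SetIdentifier": "primary",
    "Failover": "PRIMARY",
    "HealthCheckId": "hc-primary",  # hypothetical health check ID
    "AliasTarget": {
        "HostedZoneId": "Z1ELBZONE1",
        "DNSName": "elb-primary.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": True,
    },
}

# SECONDARY record: Route 53 answers with this one when the primary fails.
secondary_record = dict(
    primary_record,
    SetIdentifier="secondary",
    Failover="SECONDARY",
    HealthCheckId="hc-secondary",
    AliasTarget={
        "HostedZoneId": "Z2ELBZONE2",
        "DNSName": "elb-secondary.eu-west-1.elb.amazonaws.com",
        "EvaluateTargetHealth": True,
    },
)
```

Both records share the same name; the `Failover` value and health checks decide which one Route 53 returns.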
Question #528 Topic 1
A data analytics company wants to migrate its batch processing system to AWS. The company receives thousands of small data files periodically during the day through FTP. An on-premises batch job processes the data files overnight. However, the batch job takes hours to finish running.
The company wants the AWS solution to process incoming data files as soon as possible with minimal changes to the FTP clients that send the files. The solution must delete the incoming data files after the files have been processed successfully. Processing for each file needs to take 3-8 minutes.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use an Amazon EC2 instance that runs an FTP server to store incoming files as objects in Amazon S3 Glacier Flexible Retrieval. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job to process the objects nightly from S3 Glacier Flexible Retrieval. Delete the objects after the job has processed the objects.
B. Use an Amazon EC2 instance that runs an FTP server to store incoming files on an Amazon Elastic Block Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job to process the files nightly from the EBS volume. Delete the files after the job has processed the files.
C. Use AWS Transfer Family to create an FTP server to store incoming files on an Amazon Elastic Block Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use an Amazon S3 event notification when each file arrives to invoke the job in AWS Batch. Delete the files after the job has processed the files.
D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to process the files and to delete the files after they are processed. Use an S3 event notification to invoke the Lambda function when the files arrive.
Community vote distribution
D (86%) 14%
r3mo 2 weeks, 2 days ago
"D". Since each file takes 3-8 minutes to process, the Lambda function can process each data file without a problem.
upvoted 1 times
maver144 2 weeks, 2 days ago
You cannot set up AWS Transfer Family to save files into EBS.
upvoted 3 times
oras2023 2 weeks ago
https://aws.amazon.com/aws-transfer-family/
upvoted 1 times
secdgs 2 weeks, 2 days ago
D. Because it processes each file immediately when it is transferred to S3, instead of waiting to process several files at one time. Processing takes 3-8 minutes, so Lambda can be used.
C is wrong because AWS Batch is used to run large-scale jobs over large amounts of data at one time.
upvoted 1 times
Aymanovitchy 2 weeks, 6 days ago
To meet the requirements of processing incoming data files as soon as possible with minimal changes to the FTP clients, and deleting the files after successful processing, the most operationally efficient solution would be:
D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to process the files and delete them after processing. Use an S3 event notification to invoke the Lambda function when the files arrive.
upvoted 1 times
bajwa360 2 weeks, 6 days ago
It should be D, as Lambda is the more operationally viable solution given the fact that each processing run takes 3-8 minutes, which Lambda can handle.
upvoted 1 times
alexandercamachop 3 weeks ago
Answer has to be between C or D.
Because Transfer Family is the obvious choice for FTP.
Now I would go with C because it uses AWS Batch, which makes more sense for batch processing than AWS Lambda.
upvoted 1 times
Bill1000 3 weeks, 1 day ago
I am between C and D. My reason is:
"The company wants the AWS solution to process incoming data files <b>as soon as possible</b> with minimal changes to the FTP clients that send the files."
upvoted 2 times
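The event-driven flow in option D can be sketched as a Lambda handler. The `boto3` S3 calls are left as comments so the sketch stays self-contained; the bucket and key come from the S3 event notification, and the sample event below is hypothetical:

```python
# Lambda handler (option D): invoked by an S3 event notification per file,
# processes the object, then deletes it on success.
def handler(event, context=None):
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # ... 3-8 minutes of processing, safely inside Lambda's 15-minute limit ...
        # s3.delete_object(Bucket=bucket, Key=key)
        processed.append((bucket, key))
    return processed

# Hypothetical S3 event shape (trimmed to the fields used above).
sample_event = {"Records": [{"s3": {"bucket": {"name": "incoming"},
                                    "object": {"key": "file1.csv"}}}]}
```

Each file is handled as soon as it lands, which is exactly the "as soon as possible" requirement the nightly Batch options miss.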
Question #529 Topic 1
A company is migrating its workloads to AWS. The company has transactional and sensitive data in its databases. The company wants to use AWS Cloud solutions to increase security and reduce operational overhead for the databases.
Which solution will meet these requirements?
A. Migrate the databases to Amazon EC2. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption.
B. Migrate the databases to Amazon RDS. Configure encryption at rest.
C. Migrate the data to Amazon S3. Use Amazon Macie for data security and protection.
D. Migrate the database to Amazon RDS. Use Amazon CloudWatch Logs for data security and protection.
Community vote distribution
B (100%)
AshishRocks Highly Voted 3 weeks, 1 day ago
B is the answer
Why not C - Option C suggests migrating the data to Amazon S3 and using Amazon Macie for data security and protection. While Amazon Macie provides advanced security features for data in S3, it may not be directly applicable or optimized for databases, especially for transactional and sensitive data. Amazon RDS provides a more suitable environment for managing databases.
upvoted 5 times
alexandercamachop Most Recent 3 weeks ago
B for sure.
First the correct is Amazon RDS, then encryption at rest makes the database secure.
upvoted 2 times
oras2023 3 weeks, 1 day ago
Migrate the databases to Amazon RDS and configure encryption at rest.
Looks like the best option.
upvoted 3 times
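Encryption at rest in option B is a single flag set at instance creation. A sketch of the `rds.create_db_instance` parameters (the identifiers, instance class, and sizes are hypothetical):

```python
# RDS instance (option B) with encryption at rest enabled via AWS KMS.
create_db_params = {
    "DBInstanceIdentifier": "prod-db",
    "Engine": "postgres",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "StorageEncrypted": True,      # encryption at rest
    "KmsKeyId": "alias/aws/rds",   # default AWS managed key for RDS
    "MasterUsername": "dbadmin",
    "ManageMasterUserPassword": True,  # RDS stores the password in Secrets Manager
}
```

Encryption cannot be toggled on an existing unencrypted instance in place, which is why it belongs in the migration step itself.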
Question #530 Topic 1
A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The company uses Amazon Route 53 to point the application traffic to multiple Network Load Balancers (NLBs) in different AWS Regions. The company needs to improve application performance and decrease latency for the online game in preparation for user growth.
Which solution will meet these requirements?
A. Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control max-age parameter.
B. Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based routing.
C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use the correct listener ports.
D. Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the different stages.
Community vote distribution
C (100%)
Henrytml 1 week, 5 days ago
Only B and C handle TCP/UDP, and C comes with Global Accelerator to enhance performance.
upvoted 1 times
manuh 1 day ago
Does an ALB handle UDP? Can you share a source?
upvoted 1 times
alexandercamachop 3 weeks ago
UDP and TCP point to AWS Global Accelerator, as it works at the transport layer. Combined with the NLBs, this is perfect.
upvoted 2 times
oras2023 3 weeks, 1 day ago
C is helping to reduce latency for end clients
upvoted 2 times
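Option C's "correct listener ports" are configured per protocol on the accelerator. A sketch of the two `create_listener` parameter sets (the accelerator ARN and game port are hypothetical):

```python
# Global Accelerator listeners (option C): one for TCP, one for UDP, both
# forwarding the game's port to endpoint groups that contain the NLBs.
tcp_listener = {
    "AcceleratorArn": "arn:aws:globalaccelerator::111122223333:accelerator/abcd1234",
    "Protocol": "TCP",
    "PortRanges": [{"FromPort": 7777, "ToPort": 7777}],
}

# Same ports, UDP protocol; dict() makes a shallow copy with one key changed.
udp_listener = dict(tcp_listener, Protocol="UDP")
```

Traffic then enters AWS at the nearest edge location over anycast IPs, which is where the latency reduction comes from.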
Question #531 Topic 1
A company needs to integrate with a third-party data feed. The data feed sends a webhook to notify an external service when new data is ready for consumption. A developer wrote an AWS Lambda function to retrieve data when the company receives a webhook callback. The developer must
make the Lambda function available for the third party to call.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third party for the webhook.
B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL to the third party for the webhook.
C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda function. Provide the public hostname of the SNS topic to the third party for the webhook.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda function. Provide the public hostname of the SQS queue to the third party for the webhook.
Community vote distribution
A (100%)
Abrar2022 1 week, 3 days ago
key word: Lambda function URLs
upvoted 1 times
jkhan2405 2 weeks, 3 days ago
It's A
upvoted 1 times
alexandercamachop 3 weeks ago
A would seem like the correct one but not sure.
upvoted 1 times
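Option A is a single API call. A sketch of the `lambda.create_function_url_config` parameters (the function name is hypothetical; `AuthType` of `NONE` is common for webhooks, with request validation done inside the function, while `AWS_IAM` would require the third party to sign requests):

```python
# Function URL (option A): gives the Lambda function a dedicated HTTPS
# endpoint with no fronting ALB or API Gateway to operate.
function_url_params = {
    "FunctionName": "webhook-handler",  # hypothetical function name
    "AuthType": "NONE",
}
# lambda_client.create_function_url_config(**function_url_params)
# The response includes a "FunctionUrl" of the form
# https://<url-id>.lambda-url.<region>.on.aws/ to hand to the third party.
```

With `AuthType: NONE`, a matching resource-based policy allowing public invocation of the URL is still required, but there is no infrastructure to manage.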
Question #532 Topic 1
A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon API Gateway REST API. The company uses Amazon Route 53 as its DNS provider. The company wants to provide individual and secure URLs for all customers.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint.
B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a different Region.
C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the API Gateway endpoint.
D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.
E. Create multiple API endpoints for each customer in API Gateway.
F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).
Community vote distribution
ADF (100%)
AshishRocks 1 week, 6 days ago
Step A involves registering the required domain in a registrar and creating a wildcard custom domain name in a Route 53 hosted zone. This allows you to map individual and secure URLs for all customers to your API Gateway endpoints.
Step D is to request a wildcard certificate from AWS Certificate Manager (ACM) that matches the custom domain name you created in Step A. This wildcard certificate will cover all subdomains and ensure secure HTTPS communication.
Step F is to create a custom domain name in API Gateway for your REST API. This allows you to associate the custom domain name with your API Gateway endpoints and import the certificate from ACM for secure communication.
upvoted 1 times
jkhan2405 2 weeks, 3 days ago
It's ADF
upvoted 2 times
MAMADOUG 2 weeks, 5 days ago
For me, ADF.
upvoted 1 times
alexandercamachop 3 weeks ago
ADF: one step to create the custom domain in Route 53 (Amazon DNS), a second to request the wildcard certificate from ACM,
and a third to import the certificate into the API Gateway custom domain name.
upvoted 1 times
AncaZalog 3 weeks ago
is ADF
upvoted 1 times
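The ADF combination reduces to two request bodies. A sketch (the domain names and certificate ARN are hypothetical): step D requests the wildcard certificate via `acm.request_certificate`, and step F creates the API Gateway custom domain via `apigateway.create_domain_name`, which the wildcard Route 53 record from step A then resolves to:

```python
# Step D: one wildcard certificate, requested in the API's Region,
# covers every customer subdomain.
acm_request = {
    "DomainName": "*.api.example.com",
    "ValidationMethod": "DNS",
}

# Step F: custom domain in API Gateway using the ACM certificate
# (note the REST API's parameters are camelCase).
custom_domain = {
    "domainName": "customer1.api.example.com",  # hypothetical customer subdomain
    "regionalCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/abcd1234",
    "endpointConfiguration": {"types": ["REGIONAL"]},
}
```

Because the certificate and the Route 53 record are both wildcards, onboarding a new customer requires no new certificates, hosted zones, or API endpoints.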
Question #533 Topic 1
A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and to notify the company’s security team.
Which solution will meet these requirements?
A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData:S3Object/Personal event type from Macie findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
Community vote distribution
A (100%)
kapit 1 week, 1 day ago
AAAAAAA
upvoted 1 times
jack79 2 weeks ago
C. See https://docs.aws.amazon.com/macie/latest/user/findings-types.html and notice SensitiveData:S3Object/Personal:
The object contains personally identifiable information (such as mailing addresses or driver's license identification numbers), personal health information (such as health insurance or medical identification numbers), or a combination of the two.
upvoted 2 times
MAMADOUG 2 weeks, 5 days ago
I vote for A. Sensitive = Macie, and SNS to notify the security team.
upvoted 2 times
alexandercamachop 3 weeks ago
B and D are discarded, as Macie is the service that identifies PII. Now we are between A and C.
SNS is more suitable for this scenario: as a pub/sub service, we subscribe the security team and then they will receive the notifications.
upvoted 4 times
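The EventBridge rule in option A is defined by an event pattern. A sketch of the pattern for Macie findings, plus a tiny, simplified illustration of how EventBridge matches a field against a pattern's list of allowed values (the real matcher handles more operators than shown here):

```python
# Event pattern (option A): match Macie findings whose type starts with
# "SensitiveData"; the rule's target would be the security team's SNS topic.
event_pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

def matches(pattern_values, value):
    """Simplified EventBridge matching for one field: a value matches if it
    equals any listed literal or satisfies any listed prefix operator."""
    for candidate in pattern_values:
        if isinstance(candidate, dict) and "prefix" in candidate:
            if value.startswith(candidate["prefix"]):
                return True
        elif candidate == value:
            return True
    return False
```

The `prefix` form covers every SensitiveData subtype, including the SensitiveData:S3Object/Personal finding that option C names explicitly.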
Question #534 Topic 1
A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the logs from all accounts in a centralized account. The company has created an Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90 days after creation.
Which solution will meet these requirements MOST cost-effectively?
A. Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
B. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
D. Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
Community vote distribution
C (78%) 11% 11%
alexandercamachop Highly Voted 3 weeks ago
C seems the most suitable. It is the lowest cost.
After 30 days it is backup only; the question doesn't specify frequent access.
Therefore we must transition the objects after 30 days to Glacier Flexible Retrieval.
Also, it says deletion after 90 days, so the answers specifying a transition after 90 days make no sense.
upvoted 6 times
MAMADOUG 2 weeks, 5 days ago
Agree with you
upvoted 2 times
y0eri 1 week, 5 days ago
Question says "All logs must be highly available for 30 days for frequent analysis" I think the answer is A. Glacier is not made for frequent access.
upvoted 1 times
y0eri 1 week, 5 days ago
I take that back. Moderator, please delete my comment.
upvoted 3 times
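Option C's lifecycle is one rule with a transition and an expiration. A sketch of the configuration body for `s3.put_bucket_lifecycle_configuration` (the rule ID is hypothetical; the `GLACIER` storage class value is S3 Glacier Flexible Retrieval):

```python
# Lifecycle rule (option C): S3 Standard for the first 30 days of frequent
# analysis, Glacier Flexible Retrieval for the 60-day backup window,
# then deleted 90 days after creation.
lifecycle_config = {
    "Rules": [{
        "ID": "logs-retention",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # empty prefix: apply to all log objects
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 90},
    }],
}
```

Objects are created in S3 Standard by default, so no rule is needed for the first 30 days; that is why option A's "transition to Standard" step is redundant.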
Question #535 Topic 1
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
Community vote distribution
B (80%) A (20%)
MrAWSAssociate 1 week, 1 day ago
B is the right option. https://docs.aws.amazon.com/eks/latest/userguide/enable-kms.html
upvoted 1 times
alexandercamachop 3 weeks ago
It is B, because we need to encrypt inside of the EKS cluster, not outside. AWS KMS is to encrypt at rest.
upvoted 3 times
AncaZalog 3 weeks ago
is B, not D
upvoted 2 times
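The EKS secrets encryption from answer B boils down to associating a KMS key with the cluster's "secrets" resource. A minimal sketch of that payload (the key ARN is a placeholder; a real cluster would pass this as `encryptionConfig` at creation time or via `aws eks associate-encryption-config`):

```python
# Sketch of the encryption config behind answer B: a KMS key is
# associated with the cluster so that Kubernetes secrets are
# envelope-encrypted in etcd. The key ARN is a placeholder.
encryption_config = [
    {
        "resources": ["secrets"],  # encrypt the Kubernetes "secrets" resource
        "provider": {
            "keyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"
        },
    }
]

def encrypts_secrets(cfg):
    """Check that the config covers the 'secrets' resource type."""
    return any("secrets" in item["resources"] for item in cfg)

print(encrypts_secrets(encryption_config))  # True
```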
Question #536 Topic 1
A company wants to provide data scientists with near real-time read-only access to the company's production Amazon RDS for PostgreSQL database. The database is currently configured as a Single-AZ database. The data scientists use complex queries that will not affect the production database. The company needs a solution that is highly available.
Which solution will meet these requirements MOST cost-effectively?
A. Scale the existing production database in a maintenance window to provide enough power for the data scientists.
B. Change the setup from a Single-AZ to a Multi-AZ instance deployment with a larger secondary standby instance. Provide the data scientists access to the secondary instance.
C. Change the setup from a Single-AZ to a Multi-AZ instance deployment. Provide two additional read replicas for the data scientists.
D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby instances. Provide read endpoints to the data scientists.
Community vote distribution
C (50%) D (40%) 10%
manuh 23 hours, 46 minutes ago
Why not B? Shouldn't it have fewer instances than both C and D?
upvoted 1 times
0628atv 3 days, 11 hours ago
D:
https://aws.amazon.com/tw/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-option/
upvoted 1 times
vrevkov 1 week, 1 day ago
vrevkov 1 week, 1 day ago
I think it's D.
C: Multi-AZ instance = active + standby + two read replicas = 4 RDS instances
D: Multi-AZ cluster = active + two standby = 3 RDS instances
Single-AZ and Multi-AZ deployments: Pricing is billed per DB instance-hour consumed from the time a DB instance is launched until it is stopped or deleted.
https://aws.amazon.com/rds/postgresql/pricing/?pg=pr&loc=3
In the case of a cluster, you will pay less.
upvoted 1 times
Axeashes 1 week, 5 days ago
Multi-AZ instance: the standby instance doesn’t serve any read or write traffic.
Multi-AZ DB cluster: consists of primary instance running in one AZ serving read-write traffic and two other standby running in two different AZs serving read traffic.
https://aws.amazon.com/blogs/database/choose-the-right-amazon-rds-deployment-option-single-az-instance-multi-az-instance-or-multi-az-database-cluster/
upvoted 2 times
oras2023 2 weeks ago
It looks like another question about Multi-AZ cluster/instance deployment, but in this case we don't need the 40-second failover, so there is no reason to look at the cluster and buy more resources than we need.
We provide the data science team two read replicas for their queries.
upvoted 1 times
maver144 2 weeks, 2 days ago
It's either C or D. To be honest, I find the newest questions to be ridiculously hard (roughly 500+). I agree with @alexandercamachop that Multi-AZ in instance mode is cheaper than a cluster. However, with a cluster we have a reader endpoint available out of the box, so there is no need to
provide read replicas, which also have their own costs. The ridiculous part is that I'm pretty sure even AWS support would have trouble answering which configuration is MOST cost-effective.
upvoted 3 times
manuh 23 hours, 39 minutes ago
Absolutely true that the 500+ questions are damn difficult to answer. I still don't know why B is incorrect. Shouldn't 1 extra be better than 2?
upvoted 1 times
maver144 2 weeks, 2 days ago
Near real-time is a clue for C, since read replicas are async, but it's still not an obvious question.
upvoted 2 times
alexandercamachop 3 weeks ago
C.
The question says highly available, therefore a Multi-AZ deployment.
It also mentions cost consideration; a database instance deployment is cheaper than a cluster (D).
Also, read replicas are a must since the queries are complex and can slow down the database (some versions of the question omit "complex queries", but that must be a mistake).
upvoted 4 times
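vrevkov's instance count argument can be sketched numerically. The hourly rate below is a placeholder, not a real price; the point is only that both options are billed per DB instance-hour, so the option with fewer instances costs less:

```python
# Rough cost sketch of the C-vs-D debate. HOURLY_RATE is a
# placeholder; RDS bills per DB instance-hour either way.
HOURLY_RATE = 0.50   # placeholder price per instance-hour
HOURS_PER_MONTH = 730

def monthly_cost(instances, rate=HOURLY_RATE, hours=HOURS_PER_MONTH):
    return instances * rate * hours

# Option C: primary + standby (Multi-AZ instance) + 2 read replicas
option_c = monthly_cost(4)
# Option D: Multi-AZ cluster = primary + 2 readable standbys
option_d = monthly_cost(3)

print(option_c, option_d)  # D runs one fewer instance
```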
Question #537 Topic 1
A company runs a three-tier web application in the AWS Cloud that operates across three Availability Zones. The application architecture has an Application Load Balancer, an Amazon EC2 web server that hosts user session states, and a MySQL database that runs on an EC2 instance. The
company expects sudden increases in application traffic. The company wants to be able to scale to meet future application capacity demands and to ensure high availability across all three Availability Zones.
Which solution will meet these requirements?
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Memcached with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
C. Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
Community vote distribution
A (67%) B (33%)
alexandercamachop 3 weeks ago
Memcached is best suited for caching data, while Redis is better for storing data that needs to be persisted. If you need to store data that needs to be accessed frequently, such as user profiles, session data, and application settings, then Redis is the better choice
upvoted 4 times
AncaZalog 3 weeks ago
is A not B
upvoted 3 times
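The session-offloading pattern behind answer A (keep session state out of the web server so instances can scale in and out) can be sketched with an in-memory stand-in for Redis. The class below only mimics the SETEX/GET semantics a real deployment would get from a Redis client pointed at the ElastiCache endpoint; key names and TTLs are illustrative:

```python
import time

class FakeSessionStore:
    """In-memory stand-in for Redis SETEX/GET session semantics."""

    def __init__(self):
        self._data = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value with an absolute expiry time, like Redis SETEX.
        self._data[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._data[key]  # lazily expire, as Redis would
            return None
        return value

store = FakeSessionStore()
store.setex("session:abc123", 1800, '{"user": "alice"}')  # 30-minute session
print(store.get("session:abc123"))
```

Because every web server reads sessions from the shared store instead of local memory, any instance in any of the three Availability Zones can serve any user.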
Question #538 Topic 1
A global video streaming company uses Amazon CloudFront as a content distribution network (CDN). The company wants to roll out content in a phased manner across multiple countries. The company needs to ensure that viewers who are outside the countries to which the company rolls out content are not able to view the content.
Which solution will meet these requirements?
A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error message.
B. Set up a new URL for restricted content. Authorize access by using a signed URL and cookies. Set up a custom error message.
C. Encrypt the data for the content that the company distributes. Set up a custom error message.
D. Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.
Community vote distribution
A (100%)
AncaZalog 3 weeks ago
is B not A
upvoted 1 times
manuh 23 hours, 33 minutes ago
Signed url or cookies can be used for the banner country as well?
upvoted 1 times
antropaws 6 days, 10 hours ago
Why's that?
upvoted 1 times
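The allow-list geo restriction from answer A is a small piece of the CloudFront distribution config. A sketch of that fragment and the decision it drives (the country codes are illustrative launch countries, not from the question):

```python
# Sketch of the geo-restriction fragment of a CloudFront
# distribution config (answer A): an allow list ("whitelist" in the
# API) of countries where the content has been rolled out.
geo_restriction = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": 2,
        "Items": ["US", "CA"],  # illustrative launch countries
    }
}

def viewer_allowed(country_code, cfg=geo_restriction):
    """Decide whether a viewer's country passes the restriction."""
    gr = cfg["GeoRestriction"]
    in_list = country_code in gr["Items"]
    return in_list if gr["RestrictionType"] == "whitelist" else not in_list

print(viewer_allowed("US"), viewer_allowed("FR"))  # True False
```

Viewers outside the list get blocked at the edge, which is why signed URLs (B) are the wrong tool here: they gate individual users, not countries.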
Question #539 Topic 1
A company wants to use the AWS Cloud to improve its on-premises disaster recovery (DR) configuration. The company's core production business application uses Microsoft SQL Server Standard, which runs on a virtual machine (VM). The application has a recovery point objective (RPO) of 30 seconds or fewer and a recovery time objective (RTO) of 60 minutes. The DR solution needs to minimize costs wherever possible.
Which solution will meet these requirements?
A. Configure a multi-site active/active setup between the on-premises server and AWS by using Microsoft SQL Server Enterprise with Always On availability groups.
B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS Database Migration Service (AWS DMS) to use change data capture (CDC).
C. Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
D. Use third-party backup software to capture backups every night. Store a secondary set of backups in Amazon S3.
Community vote distribution
B (100%)
haoAWS 1 day, 23 hours ago
The answer should be B. A, C, and D cannot meet an RPO of only 30 seconds.
upvoted 1 times
haoAWS 1 day, 23 hours ago
Sorry, my mistake: A can also achieve a very low RPO, but A is more expensive than B.
upvoted 1 times
MrAWSAssociate 1 week, 1 day ago
I guess this question requires two answers. I think the answers would be both B & D.
upvoted 1 times
haoAWS 1 day, 23 hours ago
D does not make sense since the RPO is 30 seconds; backing up every night is far too infrequent.
upvoted 1 times
Abrar2022 1 week, 3 days ago
Keyword: change data capture (CDC).
upvoted 1 times
alexandercamachop 3 weeks ago
B is the correct one.
C and D are discarded as they make no sense.
Between A and B, it's B because RDS is a managed service; we can even pay only for the resources used when needed. Leveraging AWS DMS, it replicates/syncs the data.
upvoted 3 times
maver144 2 weeks, 2 days ago
C makes sense.
However, using AWS Elastic Disaster Recovery configured to replicate disk changes is more likely to be backup & restore than pilot light.
upvoted 1 times
Bill1000 3 weeks, 1 day ago
Why 'D'? Can someone explain? How can 'D' meet the 30s RPO?
upvoted 1 times
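The piece of answer B that makes the 30-second RPO reachable is the DMS task type: full load followed by ongoing change data capture, so the warm standby trails the source by seconds rather than by a backup interval. A sketch of that task definition (the endpoint ARNs are placeholders; a real setup would call `dms create-replication-task`):

```python
# Sketch of the DMS replication task behind answer B. The ARNs are
# placeholders, not real resources.
task = {
    "MigrationType": "full-load-and-cdc",  # initial copy, then ongoing changes
    "SourceEndpointArn": "arn:aws:dms:region:111122223333:endpoint:source-sqlserver",
    "TargetEndpointArn": "arn:aws:dms:region:111122223333:endpoint:target-rds-sqlserver",
}

def meets_seconds_rpo(migration_type):
    # Only ongoing replication (CDC) can hold an RPO measured in
    # seconds; nightly backups (option D) cannot.
    return "cdc" in migration_type

print(meets_seconds_rpo(task["MigrationType"]))  # True
```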
Question #540 Topic 1
A company has an on-premises server that uses an Oracle database to process and store customer information. The company wants to use an AWS database service to achieve higher availability and to improve application performance. The company also wants to offload reporting from its primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB instance. Direct the reporting functions to the read replica.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader instance in the cluster deployment.
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database. Direct the reporting functions to the reader instances.
Community vote distribution
D (64%) C (36%)
haoAWS 1 day, 23 hours ago
Between C and D: a Multi-AZ DB cluster does not support Oracle, so only D is correct.
upvoted 1 times
live_reply_developers 2 days, 17 hours ago
Multi-AZ DB clusters are supported only for the MySQL and PostgreSQL DB engines.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/create-multi-az-db-cluster.html
upvoted 3 times
Qjb8m9h 1 week ago
C is the answer
upvoted 1 times
vrevkov 1 week, 1 day ago
It's D.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.MultiAZDBClusters.html
Multi-AZ DB clusters aren't available for Oracle, and Aurora is more operationally efficient.
upvoted 3 times
manuh 23 hours, 23 minutes ago
The step to convert the Oracle schema to Aurora isn't mentioned.
upvoted 1 times
alexandercamachop 3 weeks ago
Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader instance in the cluster deployment.
A and B are discarded.
The answer is between C and D.
D says to use Amazon RDS to build an Amazon Aurora database, which makes no sense. C is the correct one: high availability in a Multi-AZ deployment.
Also, point the reporting functions to the reader instance.
upvoted 3 times
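The decisive fact in the thread (per the RDS docs linked above) is that Multi-AZ DB clusters support only the MySQL and PostgreSQL engines, which rules out option C's "Multi-AZ cluster ... Oracle database". A tiny lookup sketch of that constraint:

```python
# Engines supported by RDS Multi-AZ DB *clusters* (per the linked
# RDS documentation); Multi-AZ *instance* deployments support more,
# including Oracle.
MULTI_AZ_CLUSTER_ENGINES = {"mysql", "postgres"}

def supports_multi_az_cluster(engine):
    return engine.lower() in MULTI_AZ_CLUSTER_ENGINES

print(supports_multi_az_cluster("oracle"))    # False: option C is impossible
print(supports_multi_az_cluster("postgres"))  # True
```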
Question #541 Topic 1
A company wants to build a web application on AWS. Client access requests to the website are not predictable and can be idle for a long time. Only customers who have paid a subscription fee can have the ability to sign in and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
C. Create an Amazon Cognito user pool to authenticate users.
D. Create an Amazon Cognito identity pool to authenticate users.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated Amazon CloudFront configuration.
F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the frontend web content.
Community vote distribution
ACE (40%) ACF (40%) ADF (20%)
live_reply_developers 2 days, 17 hours ago
S3 doesn't support PHP as stated in answer F.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
upvoted 1 times
wRhlH 2 days, 17 hours ago
I don't think S3 can handle anything dynamic such as PHP. So I go for ACE
upvoted 1 times
msdnpro 3 days, 23 hours ago
Option B (Amazon ECS) is not the best option since the website "can be idle for a long time", so Lambda (Option A) is a more cost-effective choice. Option D is incorrect because User pools are for authentication (identity verification) while Identity pools are for authorization (access control).
Option F is wrong because S3 web hosting only serves static files like HTML/CSS/JS and cannot run server-side code such as PHP.
upvoted 1 times
0628atv 3 days, 10 hours ago
https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-1/?nc1=h_ls
upvoted 2 times
antropaws 6 days, 10 hours ago
ACF no doubt. Check the difference between user pools and identity pools.
upvoted 1 times
MrAWSAssociate 1 week, 1 day ago
These are the correct answers!
upvoted 1 times
bestedeki 1 week, 4 days ago
A. serverless
D. identity pools
F. S3 to host static content with CloudFront distribution
upvoted 1 times
oras2023 2 weeks ago
A: long idle = serverless
D: authorisation with Identity Pool
F: S3 for static web content with CloudFront distribution as well based on access patterns to data
upvoted 1 times
oras2023 1 week, 6 days ago
ACF:
https://repost.aws/knowledge-center/cognito-user-pools-identity-pools
upvoted 2 times
alexandercamachop 3 weeks ago
ACF
A = Lambda, we pay only for our use; if it is idle it won't cost, while ECS will always cost. C = Identity pool for users to sign in.
F = It uses S3 to host website which is better cost related and with CloudFront to serve content.
upvoted 3 times
alexandercamachop 3 weeks ago
User pools are for authentication (identity verification). With a user pool, your app users can sign in through the user pool or federate through a third-party identity provider (IdP).
Identity pools are for authorization (access control). You can use identity pools to create unique identities for users and give them access to other AWS services.
I would change the C for D actually.
upvoted 2 times
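The Lambda half of answer A is a handler that API Gateway invokes to look up the caller in DynamoDB. A minimal sketch with the table stubbed as a dict so it runs locally; a real function would use `boto3.resource("dynamodb").Table(...).get_item`, and the event shape below assumes an API Gateway path parameter named `userId`:

```python
import json

# Stand-in for a DynamoDB table keyed on user_id (placeholder data).
FAKE_TABLE = {"alice": {"user_id": "alice", "plan": "premium"}}

def handler(event, context=None):
    """Return the user record for the userId path parameter, or 404."""
    user_id = event.get("pathParameters", {}).get("userId")
    user = FAKE_TABLE.get(user_id)
    if user is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(user)}

resp = handler({"pathParameters": {"userId": "alice"}})
print(resp["statusCode"])  # 200
```

Because the function only runs per request, long idle periods cost nothing, which is the cost argument for A over the always-on ECS service in B.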
Question #542 Topic 1
A media company uses an Amazon CloudFront distribution to deliver content over the internet. The company wants only premium customers to have access to the media streams and file content. The company stores all content in an Amazon S3 bucket. The company also delivers content on demand to customers for a specific purpose, such as movie rentals or music downloads.
Which solution will meet these requirements?
A. Generate and provide S3 signed cookies to premium customers.
B. Generate and provide CloudFront signed URLs to premium customers.
C. Use origin access control (OAC) to limit the access of non-premium customers.
D. Generate and activate field-level encryption to block non-premium customers.
Community vote distribution
B (100%)
haoAWS 1 day, 23 hours ago
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
Notice that A is not correct because it should be a CloudFront signed URL, not S3.
upvoted 1 times
antropaws 6 days, 10 hours ago
Why not C?
upvoted 1 times
antropaws 6 days, 10 hours ago
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-cloudfront-introduces-origin-access-control-oac/
upvoted 1 times
alexandercamachop 3 weeks ago
Signed URLs https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
upvoted 2 times
haoAWS 1 day, 23 hours ago
Then why is A incorrect?
upvoted 1 times
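The canned-policy half of a CloudFront signed URL (answer B) can be sketched without AWS: build the policy JSON with the resource URL and an expiry, then apply CloudFront's URL-safe base64 substitutions. The RSA signing step is omitted here (a real implementation would sign the policy with the key pair's private key, e.g. via botocore's `CloudFrontSigner`); the URL and expiry time are illustrative:

```python
import base64
import json

def cloudfront_b64(data: bytes) -> str:
    """CloudFront's URL-safe base64 variant: + -> -, = -> _, / -> ~."""
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

def canned_policy(url: str, expires_epoch: int) -> str:
    """Build the canned policy JSON that gets signed and base64-encoded."""
    policy = {"Statement": [{
        "Resource": url,
        "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
    }]}
    return json.dumps(policy, separators=(",", ":"))

policy = canned_policy("https://d111111abcdef8.cloudfront.net/movie.mp4",
                       1700000000)  # placeholder distribution and expiry
print(cloudfront_b64(policy.encode())[:16])
```

The per-resource expiry is what makes signed URLs fit "a specific purpose, such as movie rentals": each premium customer gets a URL that stops working after the rental window.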
Question #543 Topic 1
A company runs Amazon EC2 instances in multiple AWS accounts that are individually billed. The company recently purchased a Savings Plan.
Because of changes in the company’s business requirements, the company has decommissioned a large number of EC2 instances. The company wants to use its Savings Plan discounts on its other AWS accounts.
Which combination of steps will meet these requirements? (Choose two.)
A. From the AWS Account Management Console of the management account, turn on discount sharing from the billing preferences section.
B. From the AWS Account Management Console of the account that purchased the existing Savings Plan, turn on discount sharing from the billing preferences section. Include all accounts.
C. From the AWS Organizations management account, use AWS Resource Access Manager (AWS RAM) to share the Savings Plan with other accounts.
D. Create an organization in AWS Organizations in a new payer account. Invite the other AWS accounts to join the organization from the management account.
E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2 instances and Savings Plan. Invite the other AWS accounts to join the organization from the management account.
Community vote distribution
AE (71%) 14% 14%
oras2023 2 weeks ago
It's not good practice to create a payer account with any workload, so it must be D.
Because we need Organizations for sharing, we need to turn it on from our PAYER account (all sub-accounts then start sharing discounts).
upvoted 1 times
oras2023 2 weeks ago
changed to AD
upvoted 1 times
maver144 2 weeks, 2 days ago
@alexandercamachop it is AE. I believe it's just a typo. RAM is not needed anyhow.
upvoted 3 times
oras2023 2 weeks ago
You are right
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html
upvoted 2 times
alexandercamachop 3 weeks ago
C & E for sure.
In order to share Savings Plans, we need an organization. Create that organization first and then invite everyone to it. From that console, share it with the other accounts.
upvoted 1 times
Question #544 Topic 1
A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API Gateway endpoint is a custom domain name that points to an Amazon Route 53 alias record. A solutions architect needs to create a solution that has minimal effects on customers and minimal data loss to release the new version of APIs.
Which solution will meet these requirements?
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an appropriate percentage of traffic to the canary stage. After API verification, promote the canary stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format. Use the import-to-update operation in merge mode into the API in API Gateway. Deploy the new version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format. Use the import-to-update operation in overwrite mode into the API in API Gateway. Deploy the new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom domain name for the new API Gateway API. Point the Route 53 alias record to the new API Gateway API custom domain name.
Community vote distribution
A (100%)
Abrar2022 1 week, 3 days ago
Keyword: "latest API version".
Canary release is a software development strategy in which a "new version of an API" (as well as other software) is deployed for testing purposes.
upvoted 2 times
jkhan2405 2 weeks, 3 days ago
It's A
upvoted 1 times
alexandercamachop 3 weeks ago
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an appropriate percentage of traffic to the canary stage. After API verification, promote the canary stage to the production stage.
Canary release means routing only a certain percentage of users to the new version.
upvoted 3 times
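The canary flow from answer A can be sketched as the stage settings API Gateway exposes: a small traffic share for the canary, then promotion to 100% after verification. The values below are illustrative; a real rollout would set `canarySettings` on the stage via the API Gateway API:

```python
# Sketch of answer A's canary stage settings. percentTraffic is the
# share of requests routed to the new API version (value is
# illustrative).
canary_settings = {
    "percentTraffic": 10.0,
    "useStageCache": False,
    "stageVariableOverrides": {},
}

def promote(settings):
    """After verification, promotion sends all traffic to the new version."""
    promoted = dict(settings)  # leave the original canary config untouched
    promoted["percentTraffic"] = 100.0
    return promoted

print(promote(canary_settings)["percentTraffic"])  # 100.0
```

Because the canary stage shares the existing endpoint and custom domain, customers see no DNS change, which is why A beats D's new-endpoint-plus-Route-53 approach for "minimal effects on customers".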